ukncsc / lme

Logging Made Easy

[BUG] Kibana server is not ready yet

roberthwl opened this issue

Describe the issue
Stopped working after resolving a filesystem issue; unable to get it to recover, stuck at "Kibana server is not ready yet".
Versions: docker.elastic.co/elasticsearch/elasticsearch:7.16.3, docker.elastic.co/kibana/kibana:7.16.3, docker.elastic.co/logstash/logstash:7.16.3, all running.
Odd errors, and I have been unable to resolve the fault despite rolling back snapshots.
Kibana logs: Error: Failure installing common resources shared between all indices. Timeout: it took more than 1200000
Logstash logs: Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
Elasticsearch logs: "name": "es01", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started

It had been working fine up until the last week.

Hello,

Can you provide more of the log files? It looks like, whilst Elasticsearch is up, Logstash is unable to authenticate successfully.
What version of Elasticsearch were you running before 7.16.3?

Thanks,
Duncan

Versions:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/elasticsearch/elasticsearch 66c29cde15ce 4 weeks ago 646MB
docker.elastic.co/logstash/logstash b2e5a301659d 4 weeks ago 997MB
docker.elastic.co/kibana/kibana 8c46ec23123e 4 weeks ago 1.3GB
docker.elastic.co/kibana/kibana 7.13.4 9ebf2fabb05d 6 months ago 1.35GB
docker.elastic.co/elasticsearch/elasticsearch 7.13.4 1c9757417a29 6 months ago 1.02GB
docker.elastic.co/logstash/logstash 7.13.4 7dfab72419a4 6 months ago 965MB

Elasticsearch keeps restarting:
lme_elasticsearch.1.z8917x0seb58@LMESyslog.wl.local | Invalid initial heap size: -Xmsg
lme_elasticsearch.1.z8917x0seb58@LMESyslog.wl.local | Error: Could not create the Java Virtual Machine.
lme_elasticsearch.1.z8917x0seb58@LMESyslog.wl.local | Error: A fatal exception has occurred. Program will exit.
lme_elasticsearch.1.xqzv7fs7mvlu@LMESyslog.wl.local | Invalid initial heap size: -Xmsg
lme_elasticsearch.1.xqzv7fs7mvlu@LMESyslog.wl.local | Error: Could not create the Java Virtual Machine.
lme_elasticsearch.1.xqzv7fs7mvlu@LMESyslog.wl.local | Error: A fatal exception has occurred. Program will exit.
lme_elasticsearch.1.mpwp8m7v66fi@LMESyslog.wl.local | Invalid initial heap size: -Xmsg
lme_elasticsearch.1.mpwp8m7v66fi@LMESyslog.wl.local | Error: Could not create the Java Virtual Machine.
lme_elasticsearch.1.mpwp8m7v66fi@LMESyslog.wl.local | Error: A fatal exception has occurred. Program will exit.
lme_elasticsearch.1.j19uknu26vwy@LMESyslog.wl.local | Invalid initial heap size: -Xmsg
lme_elasticsearch.1.j19uknu26vwy@LMESyslog.wl.local | Error: Could not create the Java Virtual Machine.
lme_elasticsearch.1.j19uknu26vwy@LMESyslog.wl.local | Error: A fatal exception has occurred. Program will exit.

So Java is at fault here: Invalid initial heap size: -Xmsg. Changing options in the docker-compose doesn't fix this; the error stays the same.

docker stack services lme
ID NAME MODE REPLICAS IMAGE PORTS
mojta0om3ogu lme_elasticsearch replicated 0/1 docker.elastic.co/elasticsearch/elasticsearch:7.16.2 *:9200->9200/tcp
zg9fwpas18bn lme_kibana replicated 1/1 docker.elastic.co/kibana/kibana:7.16.2 *:443->5601/tcp
06d8uy7hb5q4 lme_logstash replicated 1/1 docker.elastic.co/logstash/logstash:7.16.2 *:5044->5044/tcp, *:12514->12514/tcp

Elasticsearch is restarting due to the JVM error: Invalid initial heap size: -Xmsg
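For reference, the trailing "g" with no number is what the JVM is rejecting: the option normally reads something like -Xms8g, so the RAM figure appears to have been lost from the stack file. A minimal check, assuming the heap is set via ES_JAVA_OPTS in docker-compose-stack-live.yml as in a typical LME install (the variable name, value and file path here are assumptions, not taken from this deployment):

# Find the heap options in the stack file:
grep -n "Xms" docker-compose-stack-live.yml
# A healthy entry looks something like the line below, where 8 is whatever RAM
# figure the install was configured with; "-Xmsg" means that number is missing:
#   "ES_JAVA_OPTS=-Xms8g -Xmx8g"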

OK, ran a deploy update and that resolved the JVM fault.

Now have:

docker stack ps lme
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wn3qmkkpm0v6 lme_elasticsearch.1 docker.elastic.co/elasticsearch/elasticsearch:7.16.3 LMESyslog.wl.local Running Running 5 minutes ago
w6lqaudtxbtz lme_kibana.1 docker.elastic.co/kibana/kibana:7.16.3 LMESyslog.wl.local Running Running 5 minutes ago
yzlnm5iu3tfy lme_logstash.1 docker.elastic.co/logstash/logstash:7.16.3 LMESyslog.wl.local Running Running 5 minutes ago

Kibana is still not ready on the URL, and the logs are as follows.

Kibana log

lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:51+00:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin "metricsEntities" is disabled."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["info","http","server","Preboot"],"pid":8,"message":"http server running at https://0.0.0.0:5601"}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Kibana is configured to authenticate to Elasticsearch with the "kibana" user. Use a service account token instead."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Use Kibana application privileges to grant reporting privileges. Using "xpack.reporting.roles.allow" to grant reporting privileges is deprecated. The "xpack.reporting.roles.enabled" setting will default to false in a future release."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Enabling or disabling the Security plugin in Kibana is deprecated. Configure security in Elasticsearch instead."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"User sessions will automatically time out after 8 hours of inactivity starting in 8.0. Override this value to change the timeout."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["info","plugins-system","standard"],"pid":8,"message":"Setting up [113] plugins: [translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]"}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["info","plugins","taskManager"],"pid":8,"message":"TaskManager is identified by the Kibana UUID: fac5b726-e3a5-4325-94aa-c7f73e432668"}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:52+00:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:53+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":8,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:53+00:00","tags":["warning","plugins","actions"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:53+00:00","tags":["warning","plugins","alerting"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:53+00:00","tags":["info","plugins","ruleRegistry"],"pid":8,"message":"Installing common resources shared between all indices"}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T17:59:54+00:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.5.2111\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T18:00:01+00:00","tags":["error","elasticsearch-service"],"pid":8,"message":"Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 10.0.1.2:9200"}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T18:00:44+00:00","tags":["warning","process"],"pid":8,"message":"Error [ProductNotSupportedSecurityError]: The client is unable to verify that the server is Elasticsearch due to security privileges on the server side. Some functionality may not be compatible if the server is running an unsupported product.\n at /usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:576:19\n at onBody (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:369:9)\n at IncomingMessage.onEnd (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:291:11)\n at IncomingMessage.emit (node:events:402:35)\n at endReadableNT (node:internal/streams/readable:1343:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)"}
lme_kibana.1.w6lqaudtxbtz@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T18:00:45+00:00","tags":["error","elasticsearch-service"],"pid":8,"message":"Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: missing authentication credentials for REST request [/_nodes?filter_path=nodes..version%2Cnodes..http.publish_address%2Cnodes.*.ip]"}

Logstash log
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:06:57,258][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:02,259][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:02,263][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:07,266][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:07,267][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:12,274][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:12,275][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:17,281][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:17,282][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:22,287][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:22,287][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:27,294][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:27,294][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:32,300][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:32,304][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:37,318][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:37,320][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:42,325][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:42,329][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:47,332][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:47,333][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:52,339][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:52,339][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:57,345][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.yzlnm5iu3tfy@LMESyslog.wl.local | [2022-01-17T18:07:57,347][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}

Elasticsearch logs

lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]",
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | "at java.lang.Thread.run(Thread.java:833) [?:?]"] }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,253Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.fleet-enrollment-api-keys-7] (alias [.fleet-enrollment-api-keys]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,253Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.fleet-policies-7] (alias [.fleet-policies]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,254Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.fleet-artifacts-7] (alias [.fleet-artifacts]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,254Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.transform-internal-007] (alias [.data-frame-internal-3]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,254Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.security-7] (alias [.security]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,254Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.geoip_databases] (alias [null]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,254Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.ml-config] (alias [null]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,254Z", "level": "INFO", "component": "o.e.i.SystemIndexManager", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Index [.tasks] (alias [null]) mappings are not up-to-date and will be updated", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,264Z", "level": "INFO", "component": "o.e.x.c.m.j.p.ElasticsearchMappings", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Mappings for [.ml-annotations-6] are outdated [7.16.2], updating it[7.16.3].", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,357Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.fleet-enrollment-api-keys-7/z2wRWiMaTomYeShZs3ZcGQ] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,362Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.fleet-policies-7/xc0JAYrXSGKWliuLIHqHPQ] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,368Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.fleet-artifacts-7/7N0eMsjjSfuNiAVeN9-KhQ] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,379Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.transform-internal-007/Dsm1TDDlQLmwLw-SBaOv8A] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,398Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.security-7/UWl5sTZySM2FRDrTADV9Ww] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,403Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.geoip_databases/WqUVgcgYTuq7z7Cm6vJ7oQ] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,552Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.ml-config/gRz5-_6gR2OYfY60r10i-A] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,560Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.tasks/GlYWeuPBRdaizcd6GwVGyA] update_mapping [task]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:14,564Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[.ml-annotations-6/V7WA5Q0KSA657S7pSv2myg] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:21,129Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[winlogbeat-7.11.1-2022.01.05-000011/Plen_UW5Td27GBj_CbDAGg] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:21,342Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[winlogbeat-7.11.1-2022.01.05-000011/Plen_UW5Td27GBj_CbDAGg] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:28,265Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[filebeat-7.8.0-2021.01.14-000001][0]]]).", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:47,729Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[winlogbeat-7.11.1-2022.01.05-000011/Plen_UW5Td27GBj_CbDAGg] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }
lme_elasticsearch.1.wn3qmkkpm0v6@LMESyslog.wl.local | {"type": "server", "timestamp": "2022-01-17T18:01:48,216Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "loggingmadeeasy-es", "node.name": "es01", "message": "[winlogbeat-7.11.1-2022.01.05-000011/Plen_UW5Td27GBj_CbDAGg] update_mapping [_doc]", "cluster.uuid": "Y_6Q_8e9RC2Pi5Hc0t-33A", "node.id": "-1X4S9hsTjmU-n6uF9a3IQ" }

Retrieved the login details from an older snapshot; still the same errors.
docker service logs lme_kibana --tail 25
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:54+00:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin "metricsEntities" is disabled."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["info","http","server","Preboot"],"pid":8,"message":"http server running at https://0.0.0.0:5601"}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Kibana is configured to authenticate to Elasticsearch with the "kibana" user. Use a service account token instead."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Use Kibana application privileges to grant reporting privileges. Using "xpack.reporting.roles.allow" to grant reporting privileges is deprecated. The "xpack.reporting.roles.enabled" setting will default to false in a future release."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Enabling or disabling the Security plugin in Kibana is deprecated. Configure security in Elasticsearch instead."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"User sessions will automatically time out after 8 hours of inactivity starting in 8.0. Override this value to change the timeout."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","config","deprecation"],"pid":8,"message":"Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["info","plugins-system","standard"],"pid":8,"message":"Setting up [113] plugins: [translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]"}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["info","plugins","taskManager"],"pid":8,"message":"TaskManager is identified by the Kibana UUID: 8593958b-7da1-46e6-a630-3f8855b7184e"}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":8,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:55+00:00","tags":["warning","plugins","actions"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:56+00:00","tags":["warning","plugins","alerting"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:56+00:00","tags":["info","plugins","ruleRegistry"],"pid":8,"message":"Installing common resources shared between all indices"}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:00:57+00:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.5.2111\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:01:02+00:00","tags":["error","elasticsearch-service"],"pid":8,"message":"Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 10.0.1.2:9200"}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:01:39+00:00","tags":["warning","process"],"pid":8,"message":"Error [ProductNotSupportedSecurityError]: The client is unable to verify that the server is Elasticsearch due to security privileges on the server side. Some functionality may not be compatible if the server is running an unsupported product.\n at /usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:576:19\n at onBody (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:369:9)\n at IncomingMessage.onEnd (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:291:11)\n at IncomingMessage.emit (node:events:402:35)\n at endReadableNT (node:internal/streams/readable:1343:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)"}
lme_kibana.1.ldedj17l456o@LMESyslog.wl.local | {"type":"log","@timestamp":"2022-01-17T19:01:41+00:00","tags":["error","elasticsearch-service"],"pid":8,"message":"Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: missing authentication credentials for REST request [/_nodes?filter_path=nodes..version%2Cnodes..http.publish_address%2Cnodes.*.ip]"}

docker service logs lme_logstash --tail 25
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:06:56,190][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:00,255][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:01,197][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:05,261][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:06,204][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:10,269][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:11,211][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:15,275][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:16,217][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:20,281][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:21,223][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:25,287][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:26,229][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:30,292][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:31,235][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:35,298][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:36,241][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:40,304][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:41,246][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:45,309][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:46,252][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:50,317][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:51,259][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:55,322][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.rvrcq0m9wxbj@LMESyslog.wl.local | [2022-01-17T19:07:56,265][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}

As mysteriously as it stopped, it has now resumed.
I still have the error in Logstash, but it doesn't seem to affect operations.
docker service logs lme_logstash --tail 5
lme_logstash.1.x20oxf0oy8uc@LMESyslog.wl.local | [2022-01-18T05:58:44,468][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.x20oxf0oy8uc@LMESyslog.wl.local | [2022-01-18T05:58:44,778][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.x20oxf0oy8uc@LMESyslog.wl.local | [2022-01-18T05:58:49,470][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.x20oxf0oy8uc@LMESyslog.wl.local | [2022-01-18T05:58:49,782][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}
lme_logstash.1.x20oxf0oy8uc@LMESyslog.wl.local | [2022-01-18T05:58:54,473][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://elasticsearch:9200/'"}

docker stack ps lme
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
e2aid0ujnqke lme_elasticsearch.1 docker.elastic.co/elasticsearch/elasticsearch:7.16.3 LMESyslog.wl.local Running Running 5 hours ago
f5i0k2r4yn98 lme_kibana.1 docker.elastic.co/kibana/kibana:7.16.3 LMESyslog.wl.local Running Running 5 hours ago
x20oxf0oy8uc lme_logstash.1 docker.elastic.co/logstash/logstash:7.16.3 LMESyslog.wl.local Running Running 5 hours ago

Hi,

The number of authentication errors shown in the logs above suggests that the password listed in your docker-compose-stack-live.yml is incorrect.
This seems the likely culprit, considering the RAM value in that file was previously set wrongly, and the encryptionKey errors also suggest that the docker-compose-stack-live file has somehow been altered incorrectly.

If you look in your docker-compose-stack-live file, does it contain a random-looking password for the elasticsearch_password setting in the kibana section?
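If it helps, something like the following will pull that line out for comparison against an older copy of the file (the path is just wherever your stack file lives):

# Show the password setting currently in use (case-insensitive match):
grep -n -i "elasticsearch_password" docker-compose-stack-live.yml
# Compare the value against the copy in your earlier snapshot.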

Thanks,
Duncan

As I had various VM snapshots, I was able to trace back to previous yml setups.
Updates apart, there are a few changes in the yml file that I can review. The RAM value had definitely changed, and that caused the JVM heap error and the overall ELK failure!
There is a random-looking password and I don't recall setting it. As I have a previous yml to review, I will see if I can then resolve the Logstash 401 error.
The main log capture is working and I am able to access the stack, so that is one good thing. How it got into such a state is another matter, and needs to be looked into in more depth.
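For the 401 itself, a username/password pair can be tested directly against the published Elasticsearch port from the host, as sketched below. USERNAME and PASSWORD are placeholders for the values in the yml, and -k is only there because LME uses a self-signed certificate.

# Returns a JSON cluster banner if the credentials are accepted, or a 401 error if not:
curl -k -u 'USERNAME:PASSWORD' https://localhost:9200/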

Hi @roberthwl, did you ever get to the bottom of why the docker-compose-stack-live file was modified, and were you able to resolve the issue by reverting it to the original value?

For the actual error "Kibana server is not ready yet", here's what I did to work around the issue when it popped up for me: the value of network.host in /etc/elasticsearch/elasticsearch.yml and the value of server.host in /etc/kibana/kibana.yml need to match. Specifically, in my install, 0.0.0.0 is set as the IP address for both attributes; a quick check is sketched below.
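A simple way to confirm both values, using the paths from the comment above (in a containerised LME install these files sit inside the containers, so the exact paths may differ):

grep -n "network.host" /etc/elasticsearch/elasticsearch.yml
grep -n "server.host" /etc/kibana/kibana.yml
# Both should report the same bind address, e.g. 0.0.0.0, per the workaround above.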

It sounds like this issue has been resolved despite it being unclear what the initial problem was.