portainer / agent

The Portainer agent

Home Page: https://www.portainer.io

Permanent crashing with the error "invalid memory address or nil pointer dereference" in getSwarmConfiguration

shafiev opened this issue · comments

commented

Hello, I cannot start the Portainer agent. It permanently gives this error. Can anybody suggest debugging steps or something similar?


panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x815527]

goroutine 1 [running]:
github.com/portainer/agent/docker.getSwarmConfiguration(0xc000344000, 0xc0001f1300, 0x3b, 0x28, 0x1d, 0x0, 0xb, 0x15, 0xc000342850, 0x8, ...)
	/home/vsts/go/src/github.com/portainer/agent/docker/docker.go:132 +0xf7
github.com/portainer/agent/docker.(*InfoService).GetRuntimeConfigurationFromDockerEngine(0x241c618, 0x0, 0x0, 0x0)
	/home/vsts/go/src/github.com/portainer/agent/docker/docker.go:49 +0x23b
main.main()
	/home/vsts/go/src/github.com/portainer/agent/cmd/agent/main.go:53 +0xf6e
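
Judging by the trace, the panic happens while the agent reads swarm details out of the Docker engine's Info response at docker/docker.go:132. As a hedged illustration (an assumption about the crash site, not confirmed from the agent source): in the Docker Go client, info.Swarm.Cluster is a *swarm.ClusterInfo and can be nil even while Swarm is "active", so dereferencing it without a guard produces exactly this kind of panic. A minimal standalone sketch of the guarded pattern:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Talk to the local engine, as the agent does through the
	// mounted /var/run/docker.sock.
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	// info.Swarm.Cluster is a pointer and is nil when the engine has no
	// cluster details to report; guard before dereferencing it.
	if info.Swarm.Cluster == nil {
		fmt.Println("no cluster info; local node state:", info.Swarm.LocalNodeState)
		return
	}
	fmt.Println("cluster ID:", info.Swarm.Cluster.ID)
}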

System info
lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 9.12 (stretch)
Release:	9.12
Codename:	stretch

docker info

Client:
 Debug Mode: false

Server:
 Containers: 39
  Running: 28
  Paused: 0
  Stopped: 11
 Images: 21
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: a7xqwh4c880jbn8ajfsx320uk
  Is Manager: true
  ClusterID: i09r8exlzguoe3efn9tcetbpg
  Managers: 0
  Nodes: 1
  Default Address Pool: 10.0.0.0/8  
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 3
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 172.16.14.29
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.0-14-amd64
 Operating System: Debian GNU/Linux 9 (stretch)
 OSType: linux
 Architecture: x86_64
 CPUs: 32
 Total Memory: 123.2GiB
 Name: tc-prod-node-01
 ID: SOWA:CYSJ:FXSP:LQ3V:D25Z:SACM:3MLZ:LKIS:M6HG:KTLA:MNSE:YCZN
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: fltxserviceaccount
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Hi @shafiev

Which image/version of the Portainer agent are you using?

Is that the only thing you see in the logs? Can you share the agent's logs with debug mode enabled via -e LOG_LEVEL=debug?

commented

I am using the latest image:

Digest: sha256:397d3dea42d1bfceebe9ab481ef74e00e833963dcd7b301e2af60516d338e885
Status: Downloaded newer image for portainer/agent:latest

Logs with debug log level and the full command line (private information replaced with XXXX):

docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -v /:/host \
  --restart always \
  -e EDGE=1 \
  -e EDGE_ID=XXXX \
  -e EDGE_KEY=XXXX \
  -e CAP_HOST_MANAGEMENT=1 \
  -v portainer_agent_data:/data \
  --name portainer_edge_agent \
  -e LOG_LEVEL=DEBUG \
  portainer/agent:latest
Unable to find image 'portainer/agent:latest' locally
latest: Pulling from portainer/agent
Digest: sha256:397d3dea42d1bfceebe9ab481ef74e00e833963dcd7b301e2af60516d338e885
Status: Downloaded newer image for portainer/agent:latest
2021/02/25 07:50:39 [WARN] [os,options] [message: the CAP_HOST_MANAGEMENT environment variable is deprecated and will likely be removed in a future version of Portainer agent]
2021/02/25 07:50:39 [INFO] [main] [message: Agent running on Docker platform]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x815527]

goroutine 1 [running]:
github.com/portainer/agent/docker.getSwarmConfiguration(0xc000332550, 0xc0003e8400, 0x3b, 0x22, 0x1d, 0x0, 0x5, 0x11, 0xc000370a40, 0x8, ...)
	/home/vsts/go/src/github.com/portainer/agent/docker/docker.go:132 +0xf7
github.com/portainer/agent/docker.(*InfoService).GetRuntimeConfigurationFromDockerEngine(0x241c618, 0x0, 0x0, 0x0)
	/home/vsts/go/src/github.com/portainer/agent/docker/docker.go:49 +0x23b
main.main()
	/home/vsts/go/src/github.com/portainer/agent/cmd/agent/main.go:53 +0xf6e

Thanks for the update, I've added it to our backlog for investigation.

From my understanding, it could be related to an invalid manager state in your cluster.

Actually, looking at the output of your docker info command, I believe there is something wrong in your cluster configuration:

 Swarm: active
  NodeID: a7xqwh4c880jbn8ajfsx320uk
  Is Manager: true
  ClusterID: i09r8exlzguoe3efn9tcetbpg
  Managers: 0
  Nodes: 1

It looks like this node is not considered a manager node inside the cluster (Is Manager: true, yet Managers: 0).
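
Not from the original thread, but a hedged way to confirm this mismatch from the same API the agent uses: a small Go sketch (assuming the standard Docker Go client; field names are from github.com/docker/docker/api/types/swarm) that prints the fields a healthy manager should agree on.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	s := info.Swarm
	fmt.Printf("state=%s manager=%v managers=%d nodes=%d\n",
		s.LocalNodeState, s.ControlAvailable, s.Managers, s.Nodes)

	// A healthy single-node manager prints:
	// state=active manager=true managers=1 nodes=1
	if s.ControlAvailable && s.Managers == 0 {
		fmt.Println("inconsistent: node claims the manager role, but the cluster reports no managers")
	}
}

On a healthy single-node swarm this should report managers=1 nodes=1; the docker info above shows Managers: 0, which is the inconsistency being described.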

commented

Actually, looking at the output of your docker info command, I believe there is something wrong in your cluster configuration:

 Swarm: active
  NodeID: a7xqwh4c880jbn8ajfsx320uk
  Is Manager: true
  ClusterID: i09r8exlzguoe3efn9tcetbpg
  Managers: 0
  Nodes: 1

It looks like this node is not considered a manager node inside the cluster (Is Manager: true, yet Managers: 0).

Thank you. Can you suggest steps for how to fix it? Preferably without downtime for the running Docker containers.

@shafiev I'd recommend looking for help in the Docker community for this problem, as this is out of the scope of the Portainer team.

I'll close this issue as I think we've isolated the problem to the platform. If you manage to solve it and still have an issue with the agent, feel free to comment here and we'll re-open it.