Example guestbook all-in-one fails
hect1995 opened this issue · comments
I am deploying the guestbook all-in-one example into OpenShift 4 as follows:
argocd app create guestbook --repo https://github.com/kubernetes/examples.git --path guestbook/all-in-one --dest-server https://kubernetes.default.svc --dest-namespace argocd
argocd app sync guestbook
$ oc get pods
NAME                                  READY   STATUS             RESTARTS   AGE
argocd-application-controller-0       1/1     Running            0          150m
argocd-dex-server-5dd657bd9-65lg7     1/1     Running            2          150m
argocd-operator-df9b47968-8xshg       1/1     Running            0          83m
argocd-redis-759b6bc7f4-4749g         1/1     Running            0          150m
argocd-repo-server-6c495f858f-qp9l5   1/1     Running            0          150m
argocd-server-859b4b5578-s8n29        1/1     Running            0          150m
frontend-85595f5bf9-2hcst             0/1     CrashLoopBackOff   7          12m
frontend-85595f5bf9-gh4ss             0/1     CrashLoopBackOff   7          12m
frontend-85595f5bf9-tpbt9             0/1     CrashLoopBackOff   7          12m
redis-follower-dddfbdcc9-lz8v5        1/1     Running            0          12m
redis-follower-dddfbdcc9-vnp9n        1/1     Running            0          12m
redis-leader-fb76b4755-6nn2n          1/1     Running            0          12m
All frontend pods log the same errors:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.129.2.226. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
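The `(13)Permission denied` on `make_sock` is consistent with OpenShift's restricted security context: pods run under an arbitrary non-root UID by default, and a non-root process cannot bind a privileged port (< 1024) such as Apache's port 80. A minimal sketch of one common workaround, assuming cluster-admin access and a placeholder namespace `guestbook-ns` (the actual namespace is whatever the app was deployed into) — note that the `anyuid` SCC deliberately relaxes security and is generally discouraged for production; the cleaner fix is to reconfigure the image to listen on an unprivileged port like 8080:

```shell
# Allow pods using the namespace's default service account to run as any UID,
# so Apache can start as root and bind port 80.
# "guestbook-ns" is a placeholder namespace used for illustration.
oc adm policy add-scc-to-user anyuid -z default -n guestbook-ns

# Recreate the crashing pods so they are re-admitted under the new SCC
# (labels assume the upstream guestbook manifest's app/tier labels).
oc delete pods -l app=guestbook,tier=frontend -n guestbook-ns
```

After the pods restart, `oc get pods` should show the frontend replicas reaching Running instead of CrashLoopBackOff.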
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now, please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.