Kraken pod scenario is not working for me
sinamas1 opened this issue · comments
I'm currently testing the Kraken tool against Minishift, which is installed on a virtual machine. I was able to run the application chaos scenario successfully and could see the network policy being created — that scenario worked perfectly.
After that, I tried the pod scenario, which is supposed to kill a pod, but it looks like Kraken is unable to find the resource it needs.
I'm not able to figure out the issue. Any help would be greatly appreciated.
2022-06-22 02:21:18 ERROR k8s_client (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'Date': 'Wed, 22 Jun 2022 02:21:18 GMT', 'Content-Length': '174'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource","reason":"NotFound","details":{},"code":404}
Traceback (most recent call last):
File "/root/krkn/chaos/lib/python3.8/site-packages/powerfulseal/k8s/k8s_client.py", line 231, in get_scenarios
crds = self.client_extensionsApi.list_custom_resource_definition().to_dict()['items']
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api/apiextensions_v1_api.py", line 613, in list_custom_resource_definition
return self.list_custom_resource_definition_with_http_info(**kwargs) # noqa: E501
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api/apiextensions_v1_api.py", line 716, in list_custom_resource_definition_with_http_info
return self.api_client.call_api(
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/rest.py", line 239, in GET
return self.request("GET", url,
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'Date': 'Wed, 22 Jun 2022 02:21:18 GMT', 'Content-Length': '174'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource","reason":"NotFound","details":{},"code":404}
Traceback (most recent call last):
File "/root/krkn/chaos/bin/powerfulseal", line 8, in <module>
sys.exit(start())
File "/root/krkn/chaos/lib/python3.8/site-packages/powerfulseal/cli/__main__.py", line 656, in start
main(sys.argv[1:])
File "/root/krkn/chaos/lib/python3.8/site-packages/powerfulseal/cli/__main__.py", line 625, in main
success = runner.run(
File "/root/krkn/chaos/lib/python3.8/site-packages/powerfulseal/policy/policy_runner.py", line 86, in run
policy = self.read_policy()
File "/root/krkn/chaos/lib/python3.8/site-packages/powerfulseal/policy/policy_runner.py", line 52, in read_policy
scenarios = self.k8s_client.get_scenarios()
File "/root/krkn/chaos/lib/python3.8/site-packages/powerfulseal/k8s/k8s_client.py", line 231, in get_scenarios
crds = self.client_extensionsApi.list_custom_resource_definition().to_dict()['items']
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api/apiextensions_v1_api.py", line 613, in list_custom_resource_definition
return self.list_custom_resource_definition_with_http_info(**kwargs) # noqa: E501
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api/apiextensions_v1_api.py", line 716, in list_custom_resource_definition_with_http_info
return self.api_client.call_api(
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/rest.py", line 239, in GET
return self.request("GET", url,
File "/root/krkn/chaos/lib/python3.8/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'Date': 'Wed, 22 Jun 2022 02:21:18 GMT', 'Content-Length': '174'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource","reason":"NotFound","details":{},"code":404}
2022-06-22 02:21:19,179 [ERROR] Failed to run powerfulseal autonomous --use-pod-delete-instead-of-ssh-kill --policy-file scenarios/openshift/etcd.yml --kubeconfig /root/.kube/config --no-cloud --inventory-kubernetes --headless, error: Command 'powerfulseal autonomous --use-pod-delete-instead-of-ssh-kill --policy-file scenarios/openshift/etcd.yml --kubeconfig /root/.kube/config --no-cloud --inventory-kubernetes --headless' returned non-zero exit status 1.
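The 404 is raised while PowerfulSeal lists CustomResourceDefinitions through the apiextensions.k8s.io/v1 endpoint. Minishift is built on OpenShift 3.x (Kubernetes ~1.11), which only serves apiextensions.k8s.io/v1beta1; the v1 API did not exist until Kubernetes 1.16, so the call returns NotFound. A cluster-dependent sketch to confirm this on the affected VM (these commands only work against a live cluster):

```shell
# List the apiextensions API versions the cluster actually serves.
# On Minishift (Kubernetes ~1.11) only v1beta1 is expected, while the
# kubernetes Python client bundled with krkn calls the v1 endpoint.
kubectl api-versions | grep apiextensions

# Probe the v1 endpoint directly -- a 404 here reproduces the error above.
kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions

# The v1beta1 equivalent should succeed on older clusters.
kubectl get --raw /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions
```

If only v1beta1 shows up, the cluster is simply too old for the client library in use.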
I also tried injecting chaos using the Docker image, per the instructions below, but that is failing as well with the error shown.
Instructions for injecting chaos using the Docker image: https://github.com/chaos-kubox/krkn-hub/blob/main/docs/pod-scenarios.md
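For reference, the invocation from those instructions looks roughly like the sketch below — the image path/tag and the exact set of flags are assumptions taken from the linked krkn-hub doc, and the environment values mirror the ones visible in the log that follows:

```shell
# Sketch of the krkn-hub pod-scenarios run (image tag assumed; verify
# against the linked doc). Env vars match the env.sh dump in the log.
docker run --name=krknPodChaos \
  --net=host \
  -v /root/.kube/config:/root/.kube/config:Z \
  -e NAMESPACE=my-project \
  -e POD_LABEL="app=django-ex" \
  -e DISRUPTION_COUNT=1 \
  -e EXPECTED_POD_COUNT=3 \
  -d quay.io/chaos-kubox/krkn-hub:pod-scenarios
```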
root@ocpvm1:~# docker logs krknPodChaos
+ source /root/main_env.sh
++ export CERBERUS_ENABLED=False
++ CERBERUS_ENABLED=False
++ export CERBERUS_URL=http://0.0.0.0:8080
++ CERBERUS_URL=http://0.0.0.0:8080
++ export WAIT_DURATION=60
++ WAIT_DURATION=60
++ export ITERATIONS=1
++ ITERATIONS=1
++ export DAEMON_MODE=False
++ DAEMON_MODE=False
++ export RETRY_WAIT=120
++ RETRY_WAIT=120
++ export PUBLISH_KRAKEN_STATUS=False
++ PUBLISH_KRAKEN_STATUS=False
++ export PORT=8081
++ PORT=8081
++ export LITMUS_VERSION=v1.13.8
++ LITMUS_VERSION=v1.13.8
++ export SIGNAL_STATE=RUN
++ SIGNAL_STATE=RUN
++ export DEPLOY_DASHBOARDS=False
++ DEPLOY_DASHBOARDS=False
++ export CAPTURE_METRICS=False
++ CAPTURE_METRICS=False
++ export ENABLE_ALERTS=False
++ ENABLE_ALERTS=False
++ export ES_SERVER=http://0.0.0.0:9200
++ ES_SERVER=http://0.0.0.0:9200
+ source /root/env.sh
++ export RUNS=1
++ RUNS=1
++ export SECONDS_BETWEEN_RUNS=30
++ SECONDS_BETWEEN_RUNS=30
++ export NAMESPACE=my-project
++ NAMESPACE=my-project
++ export POD_LABEL=app=django-ex
++ POD_LABEL=app=django-ex
++ export DISRUPTION_COUNT=1
++ DISRUPTION_COUNT=1
++ export EXPECTED_POD_COUNT=3
++ EXPECTED_POD_COUNT=3
++ export TIMEOUT=180
++ TIMEOUT=180
++ export SCENARIO_TYPE=pod_scenarios
++ SCENARIO_TYPE=pod_scenarios
++ export 'SCENARIO_FILE=- scenarios/pod_scenario.yaml'
++ SCENARIO_FILE='- scenarios/pod_scenario.yaml'
++ export SCENARIO_POST_ACTION=
++ SCENARIO_POST_ACTION=
+ ls -la /root/.kube
+ source /root/common_run.sh
total 20
drwxr-xr-x 1 root root 4096 Jun 22 03:52 .
dr-xr-x--- 1 root root 4096 Jun 19 22:55 ..
-rw------- 1 root root 6698 Jun 21 22:18 config
+ checks
+ check_oc
+ log 'Checking if OpenShift client is installed'
++ date +%d-%m-%YT%H:%M:%S
22-06-2022T03:52:30 Checking if OpenShift client is installed
+ echo -e '\033[1m22-06-2022T03:52:30 Checking if OpenShift client is installed\033[0m'
+ which oc
+ [[ 0 != 0 ]]
+ check_kubectl
+ log 'Checking if kubernetes client is installed'
++ date +%d-%m-%YT%H:%M:%S
22-06-2022T03:52:30 Checking if kubernetes client is installed
+ echo -e '\033[1m22-06-2022T03:52:30 Checking if kubernetes client is installed\033[0m'
+ which kubectl
+ [[ 0 != 0 ]]
+ check_cluster_version
+ kubectl get clusterversion
error: the server doesn't have a resource type "clusterversion"
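The `clusterversion` resource belongs to the `config.openshift.io` API group, which exists only on OpenShift 4.x. Minishift is based on OpenShift 3.x, which does not have it, so the container's `check_cluster_version` step fails before any chaos is injected. A cluster-dependent sanity check:

```shell
# clusterversion lives in config.openshift.io, an OpenShift 4.x-only group.
# On Minishift (OpenShift 3.x) this prints nothing, which explains the
# "doesn't have a resource type" error above.
kubectl api-resources --api-group=config.openshift.io

# On OpenShift 3.x, the server version can be read with:
oc version
```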
@sinamas1 Sorry for the delayed response. We have since updated the pod scenarios code base to use native support and to run on Kubernetes as well. Please feel free to let us know if the issue still persists. Thanks.