FAILED cassandra-operator-1.0.0

I am getting the below error while setting up cassandra-operator.
$ helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
r8e789c134d 1 Fri Jun 26 07:00:39 2020 FAILED cassandra-operator-1.0.0 1.0.1 apiconnect
$ helm status r8e789c134d
LAST DEPLOYED: Fri Jun 26 07:00:39 2020
NAMESPACE: apiconnect
STATUS: FAILED
NOTES:
1. Get the cassandra-cluster admin URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace apiconnect -l "app=cassandra-operator,release=r8e789c134d" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:1770 to use your application"
kubectl port-forward $POD_NAME 1770:1770
$ export POD_NAME=$(kubectl get pods --namespace apiconnect -l "app=cassandra-operator,release=r8e789c134d" -o jsonpath="{.items[0].metadata.name}")
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
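The jsonpath error itself just means the label selector matched nothing; the object dump above shows an empty items list. Running the same selector without the index makes this visible (it is expected to return no pods while the release is in FAILED state):
kubectl get pods --namespace apiconnect -l "app=cassandra-operator,release=r8e789c134d"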
Please let me know if there is any issue in the setup.
Thanks in advance.

Related

Unauthorized to run kubectl apply via GitLab runner

I have set up the deployment stage in .gitlab-ci.yml as follows:
deploy_internal_dev:
  stage: deploy_internal
  only:
    - master
  image: image
  environment:
    name: Dev
    url: url
  script:
    - pwd
    - whoami
    - kubectl config set-context --current --namespace=xxx
    - kubectl delete -f kube/kube-dev/deployment.yml --now --timeout=100s || { echo "gracefull delete failed" ; kubectl delete -f kube/kube-dev/deployment.yml --grace-period=0 --force ; } || true
    - kubectl apply -f kube/kube-dev
  tags:
    - development
  dependencies: [ ]
This worked fine previously, but since yesterday the runner is no longer authorized to run these commands. It says:
Executing "step_script" stage of the job script
Using docker image sha256:88fd9345c2d8e3a95a9b1f792c3f330e7e529b7c217ee1d607ef9cb2a62288ca for docker.xxxx.net/xxx/kubectl-dev:1.0.0 with digest docker.xxx.net/xxxx/kubectl-dev@sha256:73548cd419ff37db648cb88285c4fc6dc1b3c9ab1addc7a050b2866e5f51bb78 ...
$ pwd
/builds/xxx-ckdu/xxx-api
$ whoami
root
$ kubectl config set-context --current --namespace=xxx
Context "kubernetes-admin@kubernetes" modified.
$ kubectl delete -f kube/kube-dev/deployment.yml --now --timeout=100s || { echo "gracefull delete failed" ; kubectl delete -f kube/kube-dev/deployment.yml --grace-period=0 --force ; } || true
error: unable to recognize "kube/kube-dev/deployment.yml": Unauthorized
gracefull delete failed
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: unable to recognize "kube/kube-dev/deployment.yml": Unauthorized
$ kubectl apply -f kube/kube-dev
error: You must be logged in to the server (the server has asked for the client to provide credentials)
ERROR: Job failed: exit code 1
However, I can run these commands and apply the k8s config via server access (SSH). I don't know what I'm missing here.
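A hedged diagnostic sketch for this kind of Unauthorized error: inside the runner job, check which credentials kubectl is actually presenting and whether the API server accepts them for the target namespace (namespace taken from the job above):
# Show the cluster/user/context the runner's kubeconfig resolves to
kubectl config view --minify
# Ask the API server whether these credentials are allowed to apply the manifests
kubectl auth can-i create deployments --namespace xxx
Comparing this with the kubeconfig used over SSH should show whether a token or client certificate in the runner's config has expired or been rotated.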

Get helm rollback image version

I'm using C# code to run the command helm rollback <ImageName> to roll back to the previous version (by creating a new Process() that invokes helm).
Is there a way to know the tag of the image the command rolled back to?
Environment
Small cluster with 2 helm charts deployed:
ingress-nginx
traefik
helm v3.7.0 is used.
Also, yq was installed to work with YAML output, in a similar way to how jq works with JSON.
Rollback logic
If no revision is set, the rollback is performed to the previous revision. If rollback is run again without a revision, the (new) previous revision is used again.
$ helm history traefik
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Tue Oct 12 11:28:22 2021 superseded traefik-10.3.5 2.5.3 Install complete
2 Tue Oct 12 11:42:47 2021 superseded traefik-10.3.6 2.5.3 Upgrade complete
3 Tue Oct 12 11:44:32 2021 superseded traefik-10.3.2 2.5.1 Upgrade complete
4 Tue Oct 12 12:03:26 2021 superseded traefik-10.3.6 2.5.3 Rollback to 2
5 Tue Oct 12 13:26:02 2021 deployed traefik-10.3.2 2.5.1 Rollback to 3
6 Tue Oct 12 13:26:53 2021 deployed traefik-10.3.6 2.5.3 Rollback to 4
So to figure out the details of the rolled-back revision, we can use the current revision.
In the same way, details of other revisions can be found; the --revision XX flag is used for that, for example:
$ helm get manifest traefik --revision 3
Answer
After some research, I found at least 3 places from which this information can be retrieved:
From manifest which was applied (most precise approach):
$ helm get manifest ingress-nginx -n ingress-nginx | yq eval '.spec.template.spec.containers[].image' - | grep -oE '[v]?[0-9]\.[0-9]\.[0-9]'
v1.0.0
$ helm get manifest traefik | yq eval '.spec.template.spec.containers[].image' - | grep -oE '[v]?[0-9]\.[0-9]\.[0-9]'
2.5.3
yq is used in this example because helm get manifest provides output only in YAML:
$ helm get manifest --help
This command fetches the generated manifest for a given release.
A manifest is a YAML-encoded representation of the Kubernetes
resources that were generated from this release's chart(s). If a chart
is dependent on other charts, those resources will also be included in
the manifest.
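The same extraction also works against a historical revision by adding the --revision flag shown earlier (a hedged sketch reusing the yq expression from above):
$ helm get manifest traefik --revision 3 | yq eval '.spec.template.spec.containers[].image' -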
From values (does not always work; depends on the chart and/or whether the image details are located in values.yaml or were set using the --set flag):
$ helm get values ingress-nginx --all -n ingress-nginx -o json | jq '.controller.image.tag'
"v1.0.0"
$ helm get values traefik --all -o json | jq '.controller.image.tag'
null
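A hedged follow-up for the null case: when you don't know the chart's values layout, jq can list every key named tag anywhere in the rendered values (assumes jq is available):
$ helm get values traefik --all -o json | jq -r 'paths | select(.[-1] == "tag") | map(tostring) | join(".")'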
From Kubernetes secrets (most difficult):
All revisions are stored as secrets in the same namespace where the chart is deployed, for instance:
$ kubectl get secrets
NAME TYPE DATA AGE
sh.helm.release.v1.traefik.v1 helm.sh/release.v1 1 134m
sh.helm.release.v1.traefik.v2 helm.sh/release.v1 1 119m
sh.helm.release.v1.traefik.v3 helm.sh/release.v1 1 118m
sh.helm.release.v1.traefik.v4 helm.sh/release.v1 1 99m
sh.helm.release.v1.traefik.v5 helm.sh/release.v1 1 16m
sh.helm.release.v1.traefik.v6 helm.sh/release.v1 1 15m
The way the data is stored in these secrets is even more complicated; however, the image tag can be retrieved from them as well. The provided link contains details on how to extract data from these secrets.
Here's a quote of the command to decode one of the secrets:
kubectl get secrets sh.helm.release.v1.wordpress.v1 -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d
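As a hedged extension of that command: the decoded payload is the release JSON whose .manifest field holds the rendered YAML, so the image can be pulled from a specific revision's secret the same way as from helm get manifest (secret name follows the traefik history above):
kubectl get secrets sh.helm.release.v1.traefik.v3 -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d | jq -r '.manifest' | yq eval '.spec.template.spec.containers[].image' -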
Useful links:
helm get manifest
helm get values

GitLab + GKE + Auto DevOps auto-deploy fails. error: arguments in resource/name form must have a single resource and name. How to find the mistake?

I am new to GitLab CI. I am trying to use https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml to deploy a simple test Django app to the Kubernetes cluster attached to my GitLab project, using a custom chart https://gitlab.com/aidamir/citest/tree/master/chart. Everything goes well, but at the last moment it shows an error message from kubectl and fails. Here is the output of the pipeline:
Running with gitlab-runner 12.2.0 (a987417a)
on docker-auto-scale 72989761
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0 ...
Running on runner-72989761-project-13952749-concurrent-0 via runner-72989761-srm-1568200144-ab3eb4d8...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/myporject/kubetest/.git/
Created fresh repository.
From https://gitlab.com/myproject/kubetest
* [new branch] master -> origin/master
Checking out 3efeaf21 as master...
Skipping Git submodules setup
Authenticating with credentials from job payload (GitLab Registry)
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"gitlab" has been added to your repositories
No requirements found in /builds/myproject/kubetest/chart/charts.
No requirements found in chart//charts.
$ auto-deploy ensure_namespace
NAME STATUS AGE
kubetest-13952749-production Active 46h
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
secret "gitlab-registry" deleted
secret/gitlab-registry replaced
$ auto-deploy deploy
secret "production-secret" deleted
secret/production-secret replaced
Deploying new release...
Release "production" has been upgraded.
LAST DEPLOYED: Wed Sep 11 11:12:21 2019
NAMESPACE: kubetest-13952749-production
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
production-djtest 1/1 1 1 46h
==> v1/Job
NAME COMPLETIONS DURATION AGE
djtest-update-static-auik5 0/1 3s 3s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-storage-pvc Bound nfs 10Gi RWX 3s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
djtest-update-static-auik5-zxd6m 0/1 ContainerCreating 0 3s
production-djtest-5bf5665c4f-n5g78 1/1 Running 0 46h
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
production-djtest ClusterIP 10.0.0.146 <none> 5000/TCP 46h
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kubetest-13952749-production -l "app.kubernetes.io/name=djtest,app.kubernetes.io/instance=production" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
error: arguments in resource/name form must have a single resource and name
ERROR: Job failed: exit code 1
Please help me to find the reason of the error message.
I did look at the auto-deploy script from the image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0. There is a settings variable to disable the rollout status check:
if [[ -z "$ROLLOUT_STATUS_DISABLED" ]]; then
  kubectl rollout status -n "$KUBE_NAMESPACE" -w "$ROLLOUT_RESOURCE_TYPE/$name"
fi
So setting
variables:
  ROLLOUT_STATUS_DISABLED: "true"
prevents the job from failing. But I still have no answer as to why the script does not work with my custom chart. When I execute the status-checking command from my laptop, it shows no errors:
kubectl rollout status -n kubetest-13952749-production -w "deployment/production-djtest"
deployment "production-djtest" successfully rolled out
I also found a report of a similar issue, https://gitlab.com/gitlab-com/support-forum/issues/4737, but there is no activity on the post.
This is my gitlab-ci.yaml:
image: alpine:latest
variables:
  POSTGRES_ENABLED: "false"
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: "" # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501
stages:
  - build
  - test
  - deploy # dummy stage to follow the template guidelines
  - review
  - dast
  - staging
  - canary
  - production
  - incremental rollout 10%
  - incremental rollout 25%
  - incremental rollout 50%
  - incremental rollout 100%
  - performance
  - cleanup
include:
  - template: Jobs/Deploy.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
variables:
  CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test
error: arguments in resource/name form must have a single resource and name
That issue you linked to has "Closed (moved)" as its status because it was moved from issue 66016, which has what I believe is the real answer:
Please try adding the following to your .gitlab-ci.yml:
variables:
  ROLLOUT_RESOURCE_TYPE: deployment
Using just the Jobs/Deploy.gitlab-ci.yml omits the variables: block from Auto-DevOps.gitlab-ci.yml, which is what correctly sets that variable.
In your case, I think you just need to move that variables: entry up to the top, since (afaik) one cannot have two top-level variables: blocks; a merged block is sketched below. I'm actually genuinely surprised your .gitlab-ci.yml passed validation.
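For illustration, a merged top-level variables: block built from the two blocks in your posted .gitlab-ci.yml might look like this (values unchanged, only folded together):
variables:
  POSTGRES_ENABLED: "false"
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: "" # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501
  CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test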
Separately, if you haven't yet seen it, you can set the TRACE variable to switch auto-deploy into set -x mode, which is super, super helpful in seeing exactly what it is trying to do. I believe your command was trying to run rollout status /whatever-name, and with just a slash it doesn't know what kind of name that is.
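To make that concrete, a hedged local reproduction: with ROLLOUT_RESOURCE_TYPE empty, "$ROLLOUT_RESOURCE_TYPE/$name" expands to just /production-djtest, and running the check that way should produce the same resource/name error (namespace and name taken from the command you ran from your laptop):
kubectl rollout status -n kubetest-13952749-production -w "/production-djtest"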
I was facing this error in a different context. There shouldn't be spaces when you're passing multiple resource types:
kubectl get deploy, rs, po -l app=mynginx # wrong
kubectl get deploy,rs,po -l app=mynginx # right

Get Deployment annotation from a Kubernetes Pod

Each Kubernetes deployment gets this annotation:
$ kubectl describe deployment/myapp
Name: myapp
Namespace: default
CreationTimestamp: Sat, 24 Mar 2018 23:27:42 +0100
Labels: app=myapp
Annotations: deployment.kubernetes.io/revision=5
Is there a way to read that annotation (deployment.kubernetes.io/revision) from a pod that belongs to the deployment?
I tried the Downward API, but that only allows getting annotations of the pod itself (not of its deployment):
kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations}'
It has been a long time, but here is what I do to get a specific annotation:
kubectl get ing test -o jsonpath='{.metadata.annotations.kubernetes\.io/ingress\.class}'
So for you it would be :
kubectl get deploy myapp -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'
I hope it helps.
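Building on that, a hedged sketch that starts from the pod itself, walks up the ownerReferences to the ReplicaSet and then to the Deployment, and reads the revision annotation (POD_NAME is a placeholder):
# Name of the ReplicaSet that owns the pod
RS=$(kubectl get pod POD_NAME -o jsonpath='{.metadata.ownerReferences[?(@.kind=="ReplicaSet")].name}')
# Name of the Deployment that owns that ReplicaSet
DEPLOY=$(kubectl get rs "$RS" -o jsonpath='{.metadata.ownerReferences[?(@.kind=="Deployment")].name}')
# Finally, read the annotation from the Deployment
kubectl get deploy "$DEPLOY" -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'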
As an alternative, you can use a kubectl selector to query all pods with the label app=myapp, together with jq to query and format the resulting JSON, to get the name and annotations of each of the pods:
kubectl get po -l app=myapp -o=json | jq '[. | {pod: .items[].metadata}][] | {name: .pod.name, annotations: .pod.annotations}'
Yes, you can get the annotations of a pod using the below command:
kubectl describe pod your_podname
and you will find the Annotations section with all annotations for the pod.
Yes. It's possible with the below command:
kubectl get pod myapp -n=default -o yaml | grep -A 8 annotations:
kubectl get pod myapp -n=default -o yaml gets all the details of the pod myapp in the default namespace in YAML format.
grep -A 8 annotations: searches for the keyword 'annotations' and displays the 8 lines that follow it (as specified by -A 8) to show the annotations.
To get only the annotations section of the pod, you can use
kubectl get pod YOUR_POD_NAME -o yaml | grep -i 'annotations'
You can also use JSONPath, like:
kubectl get pod YOUR_POD_NAME -o jsonpath='{.metadata.annotations}{"\n"}'

kubernetes rolling update using helm

I am new to Helm. I have installed Minikube & Helm on my Windows system. I am able to create pods using Helm and see the deployment, pods & replicaset in the dashboard.
I want to do a rolling update using Helm. Guide me on how to do a rolling update in K8s using Helm.
Creating a Tomcat pod using Helm:
helm create hello-world
I changed the image name and deployment name in deployment.yaml:
kind: Deployment
metadata:
  name: mytomcat
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: tomcat
Install
helm install hello-world
NAME: whopping-dolphin
LAST DEPLOYED: Wed Aug 30 21:38:42 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
whopping-dolphin-hello-world 10.0.0.178 <none> 80/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mytomcat 1 1 1 0 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=hello-world,release=whopping-dolphin" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
I see mytomcat deployment and pod mytomcat-2768693561-hd2hd in dashboard.
Now I would like to run a command that will delete my current deployment & pod in k8s and create a new deployment and pod.
It would be helpful to get sample commands and YAML.
The command below works fine for a rolling update: the first time it will install, and the next time it will upgrade.
helm upgrade --install tom-release --set appName=mytomcatcon hello-world
tom-release is my release name, and runtime values are passed to the Helm chart using the --set option.
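As a hedged illustration of what the rolling update then looks like: assuming the chart templates the image tag from values.yaml (the deployment.yaml above hardcodes image: tomcat, so it would first need to reference something like {{ .Values.image.tag }}), an upgrade that changes the tag triggers a rolling update that can be watched with kubectl:
# image.tag here is an assumed values key, not part of the original chart excerpt
helm upgrade --install tom-release hello-world --set image.tag=9.0
# Watch the Deployment replace pods with the new image
kubectl rollout status deployment/mytomcat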