ArgoCD: How to restart multiple Apps at once - kubernetes

My database was down today and multiple applications in my cluster lost their connection to it, but the Pods stayed Healthy (I know, I should have better health checks, but I don't). When the database came back online, the applications were not able to reconnect, so I would like to restart all of my Deployments. They share a label and are in the same Kubernetes namespace. We use ArgoCD to manage the applications.
In ArgoCD, I know I can restart all Deployments in one ArgoCD Application by typing this command:
argocd app actions run my-app restart --kind Deployment --all
https://github.com/argoproj/argo-cd/blob/master/docs/user-guide/commands/argocd_app_actions_run.md
But I do not know how to restart the Deployments of multiple independent ArgoCD Applications. I tried the following, but neither works:
argocd app actions run my-app1 my-app2 restart --kind Deployment --all
argocd app actions run -l mylabel=value restart --kind Deployment --all
How can I restart multiple applications in ArgoCD with one command?
I would like to use the same syntax as the sync command (https://github.com/argoproj/argo-cd/blob/master/docs/user-guide/commands/argocd_app_sync.md):
argocd app sync [APPNAME... | -l selector] [flags]
I tried sync, but it does not restart Deployments unless I make some change to the Deployment itself (or use a configMapGenerator, which is not my case).
Thank you in advance.
EDIT: I created a shell script for my needs:
for i in $(argocd app list -l yourgroup=your.label --output name); do
  argocd app actions run "$i" restart --kind Deployment --all
done
It will fail for apps that do not have a Deployment, but for me it restarted everything I wanted.
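If you want the loop to keep going cleanly when an app has no Deployment, a small variation of the same script (same label assumption as above) simply swallows the error for those apps:
for i in $(argocd app list -l yourgroup=your.label --output name); do
  argocd app actions run "$i" restart --kind Deployment --all || echo "skipping $i (restart action failed, e.g. no Deployment)"
done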

Related

Stopping all pods in Kubernetes cluster before running database migration job

I deploy my app into the Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run the migrations with a Kubernetes Job object using a Helm "pre-upgrade" hook.
The problem is that when the migration Job starts, the old-version pods are still working with the database. They can lock objects in the database, and because of that the migration Job may fail.
So, I want to somehow automatically stop all the pods in the cluster before the migration Job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate all answers.
There are two ways I can see that you can do this.
The first option is to scale down the pods before the deployment (for example, via Jenkins, CircleCI, GitLab CI, etc.):
kubectl scale --replicas=0 -n {namespace} deployment/{deployment-name}
helm install .....
The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook with a higher priority than the migrations hook, so that it runs before the migration Job, and use that hook to do the kubectl scale down.
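A minimal sketch of such a scale-down hook, assuming the app runs as Deployment my-app in namespace my-namespace, that a kubectl image such as bitnami/kubectl is acceptable, and that the ServiceAccount used here has RBAC permission to scale Deployments (all of these names are placeholders); the hook-weight just needs to be lower than the migration Job's so it runs first:
apiVersion: batch/v1
kind: Job
metadata:
  name: scale-down-before-migrations
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-10"              # lower weight = runs before the migration hook
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: scale-down          # placeholder; needs permission to scale deployments
      restartPolicy: Never
      containers:
        - name: scale-down
          image: bitnami/kubectl              # placeholder kubectl image
          command: ["kubectl", "scale", "--replicas=0", "-n", "my-namespace", "deployment/my-app"]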

No YAML Files in K8s Deployment

TLDR: My understanding from learning all about K8s is that you need lots and lots of YAML files; however, I just deployed an app to a K8s cluster with 0 YAML files and it succeeded. Why is that? Does Google Cloud or K8s have defaults it uses when the app does not have any YAML file settings?
Longer:
I have a Dockerized Spring app that I deployed to a Google Cloud cluster I created via the UI.
It had 0 YAML files in there, so my expectation was that kubectl deploy would fail; however, it succeeded and my stateless app is up there chugging away.
How does that work?
Well, GCP created it for you in the background. I assume you pushed your Docker image (or had CI push it) to the cluster and from there you just did a few clicks, right? You can do the same on an OpenShift environment. In the background a YAML file gets generated; if you edit the pod in the UI you will see that YAML file.
As @Volodymyr Bilyachat said above, you can create a Deployment the imperative way or the declarative way (YAML). I would suggest always using the declarative way.
You can see the Deployment YAML file you created from the UI by running:
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml #This will output your yaml file into name.yaml file
You can run your containers/pods using plain commands.
kubectl run podname --image=name
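If you later want a manifest for that kind of imperative command, kubectl can print one without creating anything. A small sketch (the deployment and image names are placeholders; on older kubectl versions the flag is --dry-run instead of --dry-run=client):
kubectl create deployment myapp --image=myimage --dry-run=client -o yaml > myapp-deployment.yaml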
As you said, 0 YAML files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share the configuration, and someone else will be able to create the infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml

kubernetes gcp caching old image

I'm running a GKE cluster, and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old one cached. Is there a way to update it without re-deploying (aka without destroying it first)?
There is a known issue with Kubernetes where, even if you change a ConfigMap, the old config remains, and you can either redeploy or work around it with:
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
is there something similar with cached images?
I think you're looking for kubectl set or kubectl patch, which I found in the Kubernetes documentation.
To update the image of a Deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=image:name_of_image
To update the image of your Pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command-line tool:
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change the way you think about this. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments so that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clusters for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps; read them carefully (there is a small sketch of those settings after point 2 below).
2) The reason why Kubernetes launches the old image is that by default it uses imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
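As a rough sketch covering both points (the names, replica count, and surge values are arbitrary placeholders, not recommendations):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                       # placeholder name
spec:
  replicas: 3                                        # more than one replica so a single pod death is tolerable
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1                              # at most one old pod down at a time
      maxSurge: 1                                    # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-image:latest   # placeholder image
          imagePullPolicy: Always                    # always pull instead of reusing the cached image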

Update kubernetes secrets doesn't update running container env vars

Currently, when updating a Kubernetes Secrets file, in order to apply the changes I need to run kubectl apply -f my-secrets.yaml. If there is a running container, it will still be using the old Secrets. In order to apply the new Secrets to the running container, I currently run kubectl replace -f my-pod.yaml.
I was wondering if this is the best way to update a running container's Secrets, or am I missing something.
Thanks.
For k8s versions >1.15: kubectl rollout restart deployment $deploymentname. This will restart pods incrementally without causing downtime.
The secret docs for users say this:
Mounted Secrets are updated automatically
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. The update time depends on the kubelet syncing period.
Mounted Secrets are updated; the question is when. The fact that the content of a Secret is updated does not mean that your application automatically consumes it. It is the job of your application to watch for file changes in this scenario and act accordingly. With this in mind, you currently need to do a little more work. One way I can think of right now is to run a scheduled Job in Kubernetes which talks to the Kubernetes API to initiate a new rollout of your Deployment. That way you could theoretically achieve what you want to renew your secrets. It is not elegant, but it is the only way I have in mind at the moment. I still need to check more of the Kubernetes concepts myself, so please bear with me.
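A rough sketch of that scheduled-job idea, assuming a CronJob that just triggers a new rollout (the schedule, deployment name, kubectl image, and ServiceAccount are all placeholders, and the ServiceAccount needs RBAC permission to patch Deployments):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rollout-restart-mydeployment
spec:
  schedule: "0 3 * * *"                         # once a day; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: rollout-restarter # placeholder; needs permission to patch deployments
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl            # placeholder kubectl image
              command: ["kubectl", "rollout", "restart", "deployment/mydeployment"]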
Assuming we have a running pod mypod [with the secret mounted as mysecret in the pod spec], we can delete the existing secret:
kubectl delete secret mysecret
Recreate the same secret from the updated file(s):
kubectl create secret generic mysecret --from-file=<updated file/s>
then do
kubectl apply -f ./mypod.yaml
Check the secrets inside mypod; they will be updated.
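If you prefer to update the Secret in place instead of deleting and recreating it, one common pattern (same placeholder names as above) is:
kubectl create secret generic mysecret --from-file=<updated file/s> --dry-run=client -o yaml | kubectl apply -f -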
In case anyone (like me) wants to force a rolling update of pods which are using those secrets: from this issue, the trick is to update an env variable inside the container, and then k8s will automatically perform a rolling update of the pods:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
By design, Kubernetes won't push Secret updates into the environment variables of running Pods. If you want such a Pod to pick up the new Secret value, you have to destroy and recreate the Pod. You can read more about it here.

Redeploying a Google Container Controller when the repository Image Changes

Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
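As a sketch with placeholder names (not taken from your manifests):
kubectl set image deployment/my-app my-container=myregistry/my-image:v2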
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1, with the kubectl commands.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.