When I try to update the deployment by uploading an updated deployment.yaml, I get an error saying deployments.apps "{my-app-name}" already exists.
I know that I can update the image version from the deployment directly, but I want to do all the work in YAML so that I can keep track of what I am doing.
Thanks
p.s. I do not have console access to that machine, only the dashboard web interface.
Please try the following on the command line:
kubectl apply -f deployment.yaml -n <namespace name>
To do this from the dashboard by uploading the YAML file:
delete your existing deployment, then upload the modified file. While your deployment is running, you cannot upload the file for the same deployment again.
I could not find any other way to update a deployment through the Dashboard web UI than the submenu Deployments > View/edit YAML. It seems that the POST request to https://Web_ui_dashboard_IP/api/v1/appdeploymentfromfile does not support modifying an existing deployment.
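For reference, a minimal sketch of what such a deployment.yaml might look like (my-app, my-namespace, and the image name are placeholders, not taken from the question); bumping the image tag and re-applying the file is enough to trigger a rolling update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myrepo/my-app:1.1.0   # change this tag, then re-apply
kubectl apply -f deployment.yaml -n my-namespace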
TL;DR: My understanding from learning all about K8s is that you need lots and lots of YAML files; however, I just deployed an app to a K8s cluster with 0 YAML files and it succeeded. Why is that? Does Google Cloud or K8s have defaults it uses when the app does not have any YAML file settings?
Longer:
I have a dockerized Spring app that I deployed to a Google Cloud cluster I created via the UI.
It had 0 YAML files in there, so I expected the deployment to fail; however, it succeeded and my stateless app is up there chugging away.
How does that work?
Well, GCP created it for you in the background. I assume you pushed your Docker image (or had CI push it) to the cluster, and from there you just did a few clicks, right? You can do the same on an OpenShift environment; in the background a YAML file gets generated. If you edit the pod in the UI you will see that YAML file.
As @Volodymyr Bilyachat said above, you can create a deployment the imperative way or the declarative way (YAML). I would suggest always using the declarative way.
You can see the YAML for the deployment you created from the UI by running:
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml #This will output your yaml file into name.yaml file
You can run your containers/pods using plain commands.
kubectl run podname --image=name
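If you prefer to end up with YAML anyway, recent kubectl versions can generate a manifest from the imperative command without creating anything (my-app and nginx are placeholder names here; older kubectl versions use plain --dry-run instead of --dry-run=client):
kubectl create deployment my-app --image=nginx --dry-run=client -o yaml > deployment.yaml
You can then edit that file and apply it with kubectl apply -f deployment.yaml like any hand-written manifest.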
As you said, 0 YAML files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share configuration, so someone else can create the same infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml
I have K8s deployed on an EC2-based cluster.
There is an application running in the deployment, and I am trying to figure out the manifest files that were used to create the resources.
There were deployment, service, and ingress files used to create the app setup.
I tried the following command, but I'm not sure if it's the correct one, as it also returns a lot of extra data like lastTransitionTime, lastUpdateTime, and status:
kubectl get deployment -o yaml
What is the correct command to view the manifest yaml files of an existing deployed resource?
There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the kubectl.kubernetes.io/last-applied-configuration annotation; if you used kubectl apply, it contains a JSON version of a more original-ish manifest, but again probably with some defaulted fields.
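If the resource was created with kubectl apply, you can print that annotation directly (my-app is a placeholder name):
kubectl apply view-last-applied deployment/my-app -o yaml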
You can try using the --export flag, but it is deprecated and may not work perfectly.
kubectl get deployment -o yaml --export
Refer: https://github.com/kubernetes/kubernetes/pull/73787
Another option: set KUBE_EDITOR to cat so that kubectl edit just prints the live object instead of opening an editor, for example:
KUBE_EDITOR="cat" kubectl edit secrets rook-ceph-mon -o yaml -n rook-ceph 2>/dev/null >user.yaml
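The same trick should work for other resource types too, e.g. to dump a deployment's live YAML to a file (the deployment name is a placeholder):
KUBE_EDITOR="cat" kubectl edit deployment <deployment_name> -o yaml 2>/dev/null > deployment.yaml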
I have a YAML file which I can use to create pods. I am using the dashboard, so I can simply select the YAML file and it will create the pods. The pod starts the container, and the container runs the Docker image. So now let's say I have made some changes to the Docker image and want to deploy it again. For this, I would delete the already-running pod and upload the YAML file again.
Instead of deleting and re-uploading the YAML file, is there any keyword available that will delete the already-running pod/deployment and recreate it?
Thanks
If you are using this for development you might get away with
containers:
- image: my/app:dev
  imagePullPolicy: Always
With this, whenever your pod is recreated, you will get a fresh image version.
That said, you need to use something like a Deployment to have a pod restarted automatically, and then you can just kubectl delete pod my-pod-xxxxx-yyy to wipe the old one and get a fresh, current one in a few seconds.
For prod, please don't do that. Use tagged images and apply the changed image to your Deployment with kubectl apply -f my.yaml, or preferably something like Helm (but that is a more complicated topic for starters).
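For example, a production-style variant of the snippet above, assuming an immutable tag (my/app:1.2.0 is a placeholder):
containers:
- image: my/app:1.2.0          # bump this tag and kubectl apply to roll out
  imagePullPolicy: IfNotPresent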
I can't remember the StackOverflow question where I first saw this method, but here it is again:
kubectl --namespace thenamespace get pod thepod -o yaml | kubectl replace --save-config -f -
You can do that with all K8s resources.
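For example, to do the same with a deployment (thedeployment is a placeholder name):
kubectl --namespace thenamespace get deployment thedeployment -o yaml | kubectl replace --save-config -f -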
I created a .yaml file following this tutorial. You deploy the web service with kubectl apply -f shopfront-service.yaml. So far so good. The author says nothing, though, about how to tear everything down.
With Terraform or CloudFormation you use the same file to remove all resources. I would think that K8s would also support cleaning up with the same .yaml file, but I can't find any way to do this.
Is there a way to delete resources with the same .yaml file used to create the deployment?
kubectl delete -f shopfront-service.yaml
see kubectl delete docs
Is there any way for me to replicate the behavior I get on cloud.docker, where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment controller, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
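For example, something like this (my-app and my-container are placeholder names; the container name must match the one in your pod spec):
kubectl set image deployment/my-app my-container=myrepo/my-image:v2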
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1 with the kubectl commands.
$ kubectl edit deployment/my-nginx
That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.
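To watch that progressive rollout, and roll it back if something goes wrong, you can use the rollout subcommands, e.g. for the my-nginx deployment above:
kubectl rollout status deployment/my-nginx
kubectl rollout undo deployment/my-nginx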