One of the points in the kubectl best practices section of the Kubernetes docs states the following:
Pin to a specific generator version, such as kubectl run
--generator=deployment/v1beta1
But a little further down in the doc, we learn that, except for Pod, use of the --generator option is deprecated and will be removed in future versions.
Why is this being done? Doesn't the generator make life easier when creating a template file for the resource definition of a Deployment, Service, or other resource? What alternative is the Kubernetes team suggesting? It isn't in the docs :(
kubectl create is the recommended alternative if you want to create more than just a pod (like a Deployment).
https://kubernetes.io/docs/reference/kubectl/conventions/#generators says:
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
This pull request has the reason why generators (except run-pod/v1) were deprecated:
The direction is that we want to move away from kubectl run because it's over bloated and complicated for both users and developers. We want to mimic docker run with kubectl run so that it only creates a pod, and if you're interested in other resources kubectl create is the intended replacement.
For deployment you can try
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
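If you also need a Service in front of it, kubectl expose works against the created Deployment as well (the port here is just an example, not something from the question):
kubectl expose deployment hello-node --port=8080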
Related
I am trying to run a pod where the default limit of my GKE Autopilot cluster is 500m vCPU, but I want to run all my pods with only 250m vCPU. I tried the command kubectl run pod-0 --requests="cpu=150m" --restart=Never --image=example/only
but I get a warning: Flag --requests has been deprecated, has no effect and will be removed in the future. Then when I describe my pod, it is set to 500m. I would like to know how to set resource requests/limits with plain kubectl run.
Since kubectl v1.21, all generators are deprecated.
Github issue: Remove generators from kubectl commands. Quote:
kubectl commands are going to get rid of the dependency on the generators, and ultimately remove them entirely. The main goal is to simplify the use of the kubectl command. This is already done in kubectl run, see #87077 and #68132.
So it looks like the --limits and --requests flags will no longer be available in the future.
Here is the PR that did it: Drop deprecated run flags and deprecate unused ones.
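As a workaround, one way to still set the request from the command line is kubectl run's --overrides flag, which injects inline JSON into the generated pod. Treat this as a sketch for the pod from the question; the container name matches the one kubectl run generates:
kubectl run pod-0 --image=example/only --restart=Never \
  --overrides='{"apiVersion": "v1", "spec": {"containers": [{"name": "pod-0", "image": "example/only", "resources": {"requests": {"cpu": "250m"}}}]}}'
Alternatively, generate a manifest with --dry-run=client -o yaml, add the resources block by hand, and apply it.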
For debugging and testing purposes I'd like to find the most convenient way of launching Kubernetes pods and altering their specification on the fly.
The launching part is quite easy with imperative commands.
Running
kubectl run nginx-test --image nginx --restart=Never
gives me exactly what I want: a single pod not managed by any controller like a Deployment or ReplicaSet. Easy to play with and clean up when needed.
However when I'm trying to edit the spec with
kubectl edit po nginx-test
I'm getting the following warning:
pods "nginx-test" was not valid:
* spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
i.e. only a limited set of Pod spec fields is editable at runtime.
OPTIONS FOUND SO FAR:
Saving the Pod spec into a file:
kubectl get po nginx-test -oyaml > nginx-test.yaml
then editing it and recreating the pod with
kubectl apply -f nginx-test.yaml
A bit heavyweight for changing just one field, though.
Creating a Deployment instead of a single Pod and then editing the spec section in the Deployment itself.
The cons are:
an additional API object (the Deployment) which you should not forget to clean up when you are done
the Pod names are autogenerated in the form nginx-test-xxxxxxxxx-xxxx and are less convenient to work with
So is there any simpler option (or possibly some elegant workaround) for editing an arbitrary field in the Pod spec?
I would appreciate any suggestion.
You should absolutely use a Deployment here.
For the use case you're describing, most of the interesting fields on a Pod cannot be updated, so you would need to delete and recreate the pod yourself. A Deployment manages that for you. If a Deployment owns a Pod and you delete the Deployment, Kubernetes knows on its own to delete the matching Pod, so there isn't really any extra work.
(There's not really any reason to want a bare pod; you almost always want one of the higher-level controllers. The one exception I can think of is using kubectl run to get a debugging shell inside the cluster.)
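As a sketch, the Deployment-based version of the workflow from the question could look like this (the edit step replaces the pods for you):
kubectl create deployment nginx-test --image=nginx   # create the Deployment and its single pod
kubectl edit deployment nginx-test                   # change any field under spec.template; a replacement pod rolls out
kubectl delete deployment nginx-test                 # cleanup removes the owned pods too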
The Pod name being generated can be a minor hassle. One trick that's useful here: as of reasonably recent kubectl, you can give the deployment name to commands like kubectl logs
kubectl logs deployment/nginx-test
There are also various "dashboard" type tools out there that will let you browse your current set of pods, so you can do things like read logs without having to copy-and-paste the full pod name. You may also be able to set up tab completion for kubectl, and type
kubectl logs nginx-test<TAB>
I saw that generators except run-pod/v1 are deprecated in kubectl run. Previously it was convenient to create deployments without creating a YAML file for testing purposes, so I'm wondering why they were deprecated?
From the Changelog:
All kubectl run generators have been deprecated except for run-pod/v1. This is part of a move to make kubectl run simpler, enabling it to create only pods; if additional resources are needed, you should use kubectl create instead. (#68132, @soltysh)
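That YAML-free convenience largely carries over to kubectl create, which can emit the manifest instead of creating the object, so you can still get a deployment template without writing it by hand (the name and image below are just examples):
kubectl create deployment my-dep --image=nginx --dry-run=client -o yaml > deployment.yaml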
I have a namespace (literally named namespace) which has ~10-15 deployments in it.
I create one big YAML file and apply it on each deploy.
How do I validate, wait, watch, or block until all the deployments have been rolled out?
Currently I am thinking of:
get the list of deployments
for each deployment, make an API call to get its status
once all deployments are "green", end the process, signaling that the deployment/ship is done
What are the possible statuses of a deployment, and is there already a tool that can do this? https://github.com/Shopify/kubernetes-deploy is kind of what I am searching for, but it forces a YAML structure and so on.
What would be the best approach?
Set a readiness probe and use kubectl rollout status deployment <deployment_name> to see the deployment rollout status.
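A minimal sketch of that loop over a whole namespace (assuming, as in the question, the namespace is literally named namespace); it blocks on each deployment in turn and fails fast if any rollout stalls:
for d in $(kubectl -n namespace get deployments -o name); do
  kubectl -n namespace rollout status "$d" --timeout=120s || exit 1
done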
You'd be better off using Helm for managing deployments. Helm allows you to create reusable templates that can be applied to more than one environment. Read more here: https://helm.sh/docs/chart_template_guide/#getting-started-with-a-chart-template
You can create one big chart for all your services or you can create separate Helm charts for each your service.
Helm also allows you to run tests after deployment is done. Read more here: https://helm.sh/docs/developing_charts/#a-breakdown-of-the-helm-test-hooks
You probably want to use kubectl wait https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait
It lets you wait for a specific condition on a specific object.
In your case:
kubectl -n namespace \
  wait --for=condition=Available --timeout=32s \
  deployment/name
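To block on every deployment in the namespace at once instead of naming each one, kubectl wait also accepts --all:
kubectl -n namespace wait --for=condition=Available --timeout=32s deployment --all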
Use the --dry-run option in the apply/create command to check the syntax.
I'm just getting started with Kubernetes and am setting up a cluster on AWS using kops. In many of the examples I read (and try), there will be commands like:
kubectl run my-app --image=mycompany/myapp:latest --replicas=1 --port=8080
kubectl expose deployment my-app --port=80 --type=LoadBalancer
This seems to do several things behind the scenes, and I can view the manifest files created using kubectl edit deployment, and so forth. However, I see many examples where people create the manifest files by hand and use commands like kubectl create -f or kubectl apply -f.
Am I correct in assuming that both approaches accomplish the same goals, but that by creating the manifest files yourself, you have finer-grained control?
Would I then have to create Service, ReplicationController, and Pod specs myself?
Lastly, if you create the manifest files yourself, how do people generally structure their projects as far as storing these files? Are they simply in a directory alongside the project they are deploying?
The fundamental question is how to get all of your K8s objects into the k8s cluster. There are several ways to do this job.
Using Generators (Run, Expose)
Using Imperative way (Create)
Using Declarative way (Apply)
All of the above ways have different purposes and levels of simplicity. For instance, if you want to check quickly whether a container is working as you desired, then you might use generators.
If you want to version-control the k8s objects, then it's better to use the declarative way, which helps determine the accuracy of the data in the k8s objects.
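To make the three ways concrete, here is one example of each (the names and images are illustrative only):
kubectl run nginx --image=nginx                # generator: quick throwaway pod
kubectl create deployment nginx --image=nginx  # imperative: tell the API server exactly what to create
kubectl apply -f deployment.yaml               # declarative: reconcile the cluster toward a version-controlled manifest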
Deployment, ReplicaSet, and Pods are different layers which solve different problems. All of these concepts give k8s its flexibility (the manifest sketch after this list shows how they nest):
Pods: makes sure that related containers are kept together and run efficiently
ReplicaSet: makes sure that the k8s cluster has the desired number of replicas of the pods
Deployment: makes sure that you can have different versions of Pods and provides the capability to roll back to a previous version
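A minimal Deployment manifest, reusing the image and port from the question above as illustrative values; the template section is the Pod spec, and the ReplicaSet is generated for you:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1                  # maintained by the generated ReplicaSet
  selector:
    matchLabels:
      app: my-app
  template:                    # this becomes the Pod spec
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: mycompany/myapp:latest
        ports:
        - containerPort: 8080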
Lastly, it depends on the use case how you want to use these concepts or methodologies. It's not about which one is good or bad.
There is a little more nuance to the difference between apply and create than what is already mentioned here. Kubectl create can be used imperatively on the command line or declaratively against a manifest file.
Kubectl apply is used declaratively against a manifest file. You can't use kubectl apply imperatively.
One key difference is when you already have an object and you want to update something. Even if you used a manifest file with kubectl create, you will get an error when you use kubectl create again to update the same resource. But, if you use kubectl apply, you will not get an error. It will update the resource without any issues.
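A quick demonstration of that difference, assuming a deployment.yaml manifest already exists:
kubectl create -f deployment.yaml   # first run: deployment created
kubectl create -f deployment.yaml   # second run: Error from server (AlreadyExists)
kubectl apply -f deployment.yaml    # succeeds the first time and on every subsequent update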
So the convention is to use kubectl apply to create AND update resources, kubectl create to create resources, and kubectl run to create a pod with a specific image, namespace, etc., for experimentation and testing (often with the --dry-run=client option).