difference in syntax when doing kubernetes deployment-related operations - kubernetes

What is the difference between the following syntax usage:
kubectl get deployments
kubectl get deployment.apps
kubectl get deployment.v1.apps
There are references to deployment.v1.apps and deployment.apps in the documentation, especially when talking about rollouts and upgrades.
For example:
To see the Deployment rollout status, run kubectl rollout status deployment.v1.apps/nginx-deployment
For example:
Let's update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

There is no difference; the examples simply show different ways of addressing the same resource. kubectl accepts fully qualified names of the form resource.version.group (deployment.v1.apps) or resource.group (deployment.apps) in addition to the plain resource name, which is useful for disambiguating resources that share a short name across API groups. All three forms resolve to the same resource here.
This is the apps/v1 API that you can see referenced in the example nginx Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You can use the short form, kubectl get deployments, as well as the longer forms you provided in the question.
However, you obviously can't use a version the server doesn't serve, e.g. apps/v2:
kubectl get deployment.v2.apps/nginx-deployment
error: the server doesn't have a resource type "deployment"
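To see which group/versions the server actually serves for the apps group, and therefore which fully qualified forms will resolve, you can ask the API server directly. A quick check, assuming a standard cluster:
kubectl api-versions | grep apps        # lists served group/versions, e.g. apps/v1
kubectl api-resources --api-group=apps  # lists the resources in the apps group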

Kubernetes non-specific spec.selector does not prevent Kubernetes from working correctly

I've experienced a surprising behavior when playing around with Kubernetes and I wanted to know if there is any good explanation behind it.
I've noticed that when two Kubernetes deployments are created with the same labels and the same spec.selector, the deployments still function correctly, even though using the same selector "should" cause them to be confused about which pods belong to each of them.
Example configurations which present this -
example_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    extra_label: one
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
example_deployment_2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx
    extra_label: two
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I expected the deployments not to work correctly, since they will select pods from each other and assume it is theirs.
The actual result is that the deployments seem to be created correctly, but entering the deployment from k9s returns all of the pods. This is true for both deployments.
Can anyone please shed light on why this is happening? Is there additional internal filtering in Kubernetes to prevent pods which were not really created by the deployment from being associated with it?
I'll note that I've seen this behavior in AWS and have reproduced it in Minikube.
When you create a K8s Deployment, K8s creates a ReplicaSet to manage the pods; this ReplicaSet then creates the pods based on the number of replicas provided (or patched by the HPA). In addition to the labels and annotations you provide, the ReplicaSet adds an ownerReferences entry containing its name and uid. So even if you have 4 pods with the same labels, each pair of pods will carry a different ownerReferences entry, which is what each ReplicaSet uses to identify the pods it manages:
apiVersion: v1
kind: Pod
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: <replicaset name>
    uid: <replicaset uid>
...
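You can see this for yourself. A quick way to print each pod's owner, as a sketch, assuming the pods from the two example deployments are in the current namespace:
kubectl get pods -l app=nginx -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name
The pods from nginx-deployment and nginx-deployment-2 will list different ReplicaSet owners, which is why the two deployments do not interfere with each other's pods.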

kubectl run not creating deployment

I'm running Kubernetes with Docker Desktop on Windows. DD is up-to-date, and the kubectl version command returns 1.22 as both the client and server version.
I executed kubectl run my-apache --image httpd, then kubectl get all, which only shows the pod.
There is no deployment or replicaset as I expected. This means some commands, such as kubectl scale, don't work. Any idea what's wrong? Thanks.
The kubectl run command creates a pod, not a deployment; it used to create a deployment in the past, before Kubernetes version 1.18.
For a deployment you have to run this command:
kubectl create deployment my-apache --image httpd
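A quick way to see the difference, assuming a fresh namespace:
kubectl run my-apache --image httpd                 # creates only pod/my-apache
kubectl create deployment my-apache --image httpd   # creates a deployment, a replicaset and a pod
kubectl get deployments,replicasets,pods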
You basically spin up a pod in the default namespace. If you want to deploy your app using a Deployment, you should have a deployment.yml file and use:
kubectl apply -f <deployment_file>
Example to spin-up 3 Nginx pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
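Once the Deployment exists, kubectl scale works as the questioner expects. For example (the file name here is assumed):
kubectl apply -f nginx-deployment.yaml
kubectl scale deployment nginx-deployment --replicas=5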
Side note: There are 2 approaches when creating basic k8s objects:
IMPERATIVE way:
kubectl create deployment <DEPLOYMENT_NAME> --image=<IMAGE_NAME:TAG>
DECLARATIVE way:
kubectl create -f deployment.yaml
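Strictly speaking, kubectl create -f is still imperative: it tells the server to create that exact object and fails if it already exists. For a fully declarative workflow, kubectl apply -f is usually preferred, since it can simply be re-run after each edit to the file:
kubectl apply -f deployment.yaml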

What is the current equivalent of `kubectl run --generator=run/v1`

I'm working through Kubernetes in Action (copyright 2018), and at least one of the examples is out-of-date with respect to current versions of kubectl.
Currently I'm stuck in section 2.3 on just trying to demo a simple web-server docker container ("kubia"):
kubectl run kubia --image=Dave/kubia --port=8080 --generator=run/v1
The --generator option has been removed from current versions of kubectl. What command(s) achieve the same end in the current version of kubectl?
Note: I'm literally just 2 chapters into learning about Kubernetes, so I don't really know what a deployment or anything else is (so the official Kubernetes documentation doesn't help); I just need the simplest way to verify that I can, in fact, run this container in my minikube "cluster".
In short, you can create pods and deployments imperatively with the following commands, which are similar to the ones mentioned in that book.
To create a pod named kubia with image Dave/kubia:
kubectl run kubia --image=Dave/kubia --port=8080
To create a deployment named kubia with image Dave/kubia:
kubectl create deployment kubia --image=Dave/kubia --port=8080
You can just instantiate the pod directly, since --generator has been deprecated:
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - name: kubia
    image: Dave/kubia
    ports:
    - containerPort: 8080
Alternatively, you can use a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia-deployment
  labels:
    app: kubia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: Dave/kubia
        ports:
        - containerPort: 8080
Save either one to a something.yaml file and run
kubectl create -f something.yaml
And to clean up
kubectl delete -f something.yaml
✌️
If someone reading the same book (Kubernetes in Action, copyright 2018) runs into the same issue in the future: just run a pod instead of the replication controller, and expose the pod instead of the rc in the following chapter.
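A minimal sketch of that substitution, assuming the image and the kubia-http service name used in the book:
kubectl run kubia --image=Dave/kubia --port=8080
kubectl expose pod kubia --type=LoadBalancer --name kubia-http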

Kubernetes deployment created but not listed

I've just started learning Kubernetes and I have created a deployment using the command kubectl run demo1-k8s --image=demo1-k8s:1.0 --port 8080 --image-pull-policy=Never. I got the message that the deployment was created. But when I listed the deployments (kubectl get deployments), the deployment was not listed; instead I got the message No resources found in default namespace.
Any idea guys?
From the docs, kubectl run creates a pod and not a deployment. So you can use the kubectl get pods command to check if the pod was created or not. For creating a deployment, use kubectl create deployment, as documented here.
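A quick check, assuming the original run command was used:
kubectl get pods          # the pod created by kubectl run shows up here
kubectl get deployments   # empty, because no deployment was created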
For deployment creation you need to use kubectl create deployment; with kubectl run, a pod will be created.
kubectl create deployment demo1-k8s --image=demo1-k8s:1.0
The general form is kubectl create deployment <deployment-name> --<flags>.
But it's always better to use YAML to create a deployment or any other k8s resource. Just create a .yaml file:
deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo1-k8s
spec:
  selector:
    matchLabels:
      app: demo1-k8s
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: demo1-k8s
    spec:
      containers:
      - name: demo1-k8s
        image: demo1-k8s:1.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
Then run this command:
kubectl apply -f deploy.yaml

How to pass number of pods by command line

I am using EKS to deploy pods to my node groups. This is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: molding-app
  namespace: new-simulator
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: eks-pods
        image: 088562811725.dkr.ecr.ap-south-1.amazonaws.com/eks_pods:latest
        ports:
        - containerPort: 8080
        - containerPort: 10010
I just wanted to know if there is any way I could pass the number of replicas through the command line instead of writing it in the deployment file?
You can do it by adding the --replicas parameter:
$ kubectl create deployment molding-app --image=088562811725.dkr.ecr.ap-south-1.amazonaws.com/eks_pods:latest --replicas=3 -n <namespace>
deployment.apps/molding-app created
Later you can change it using scale:
$ kubectl scale deployment molding-app --replicas=10 -n <namespace>
deployment.extensions/molding-app scaled
More details can be found in the Kubernetes documentation about scaling a deployment.
You can scale it from the command line by using:
kubectl scale deployment molding-app --replicas=3 -n namespace
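If you prefer to keep the file as the source of truth, kubectl scale can also identify the deployment from the file itself (the file name here is assumed):
kubectl scale -f deployment.yaml --replicas=5 -n new-simulator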