I've just started learning Kubernetes. In every tutorial the writer generally uses "kubectl ... deployments" to control the newly created deploys. Now, with those commands (e.g. kubectl get deployments) I always get the response No resources found in default namespace., and I have to use "pods" instead of "deployments" to make things work (which works fine).
My question is: what is causing this, and what is the difference between using a deployment and a pod? I set the Docker driver when I first ran minikube; does that have something to do with this?
First, let's brush up on some terminology.
Pod - It's the basic building block for Kubernetes. It groups one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
Deployment - It is a controller which wraps one or more Pods and manages their life cycle, i.e. drives the actual state to the desired state. There is one more layer between Deployment and Pod, the ReplicaSet: a ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
The hierarchy, visualized: Deployment → ReplicaSet → Pod(s).
In your case, here is what might have happened:
Either you created a Pod, not a Deployment. Therefore, when you do kubectl get deployment you don't see any resources. Note that when you create a Deployment, it in turn creates a ReplicaSet for you, which in turn creates the defined Pods.
Or maybe you created your Deployment in a different namespace. If that's the case, type this command to find your Deployment in that namespace: kubectl get deploy NAME_OF_DEPLOYMENT -n NAME_OF_NAMESPACE
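To quickly check which of the two it is, you can list all three resource kinds across all namespaces in one go:
kubectl get deployments,replicasets,pods -A
If only Pods show up, you created bare Pods; if the Deployment lives in another namespace, the NAMESPACE column will tell you where.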
More information to clarify your concepts:
In the manifest below, the section inside spec.template is essentially the Pod manifest you would write if you created the Pod manually instead of taking the Deployment route. As I said earlier, in simple terms Deployments are a wrapper around your Pods, so anything outside the path spec.template is the configuration that defines how you want to manage (scaling, affinity, etc.) your Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Deployment is a controller providing a higher-level abstraction on top of Pods and ReplicaSets. A Deployment provides declarative updates for Pods and ReplicaSets. Deployments internally create ReplicaSets, within which the Pods are created.
The use cases of Deployments are documented here.
One reason for No resources found in default namespace. could be that you created the deployment in a specific namespace and not in the default namespace.
You can see deployments in a specific namespace or in all namespaces via:
kubectl get deploy -n namespacename
kubectl get deploy -A
I have this Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-webserver-nginx
  annotations:
    description: This is a demo deployment for nginx webserver
  labels:
    app: deployment-webserver-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deployment-webserver-pods
  template:
    metadata:
      labels:
        app: deployment-webserver-pods
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
My understanding of this Deployment object is that any Pod with the app: deployment-webserver-pods label will be selected. Of course, this Deployment object creates 3 replicas, but I wanted to add one more Pod explicitly, so I created a Pod object with the label app: deployment-webserver-pods; below is its Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: deployment-webserver-nginx-extra-pod
  labels:
    app: deployment-webserver-pods
spec:
  containers:
  - name: nginx-alpine-container-1
    image: nginx:alpine
    ports:
    - containerPort: 81
My expectation was that the continuously running Deployment Controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
I even tried creating this Pod with this label first and then creating my Deployment, thinking that maybe the explicit Pod would now be picked up, but that still didn't happen.
Don't Labels and Selectors work like this?
I know I can scale my deployment to 4 replicas, but I am trying to understand how Pods and other Kubernetes objects are selected using Labels and Selectors.
From the official docs:
Note: You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.
As described further in the docs, it is not recommended to scale the replicas of a Deployment using the above approach.
Another important point to note from same section of docs:
If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won't behave correctly.
My expectation was that the continuously running Deployment Controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
The Deployment Controller does not work like that; it listens for Deployment resources and "drives" them to the desired state. That typically means: if anything changes in the template: part, a new ReplicaSet is created with the given number of replicas. You cannot add a Pod to a Deployment in any other way than by changing replicas: - each instance is created from the same Pod template and is identical.
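So the supported way to get a fourth identical Pod is to change replicas:, either in the manifest or imperatively; for example, with the Deployment from your question:
kubectl scale deployment deployment-webserver-nginx --replicas=4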
Don't Labels and Selectors work like this?
... but I am trying to understand how Pods / other Kubernetes objects are selected using Labels and Selectors.
Yes, Labels and Selectors are used for many things in Kubernetes, but not for everything. When you create a Deployment with a label, a Pod with the same label, and finally a Service with a matching selector, then traffic addressed to that Service will be distributed over the instances of your Deployment as well as over your extra Pod.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: deployment-webserver-pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
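You can verify this by listing the endpoints of the Service; assuming the Service above is applied to your cluster, the endpoints should contain the IPs of the three Deployment-managed Pods and of your extra Pod:
kubectl get endpoints my-service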
Labels and Selectors are also useful for management, e.g. when using kubectl. You can add labels for a team or an app; then you can select all Deployments or Pods belonging to that team or app (e.g. if the app consists of an app-deployment and a cache-deployment), e.g.:
kubectl get pods -l team=myteam,app=customerservice
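Such labels can be set in the manifests or attached afterwards; for example (the resource names here are hypothetical):
kubectl label deployment customerservice team=myteam
kubectl label deployment customerservice-cache team=myteam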
My expectation was that the continuously running Deployment Controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
Kubernetes is a system that operates declaratively rather than imperatively, which means you write down the desired state of the application in the cluster, typically in YAML files, and these declared desired states define all of the pieces of your application.
If a cluster were configured imperatively, the way you are expecting it to be, it would be very difficult to understand and replicate how the cluster came to be in that state.
Just to add to the above explanations: if we could manually create and manage Pods like this, what would be the purpose of having controllers in Kubernetes?
My expectation was that the continuously running Deployment Controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
As per your YAML, replicas: 3 is already set, so the Deployment will not take a new Pod as a 4th replica.
Consider the following example provided in this doc.
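If you watch the Pods while trying this, you will typically see the ReplicaSet first acquire the bare Pod (it matches the selector and has no controller owner) and then terminate Pods to get back down to replicas: 3:
kubectl get pods -l app=deployment-webserver-pods --watch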
What I'm trying to achieve is to see the names of the 3 replicas from inside the container.
Following this guide I was able to get the current pod name, but I also need the pod names of my replicas.
Ideally i would like to:
print(k8s.get_my_replicaset_names())
or
print(os.getenv("MY_REPLICASET"))
and have a result like:
[frontend-b2zdv,frontend-vcmts,frontend-wtsmm]
that is, the pod names of all the container's replicas (including the current container, of course), and eventually compare the current name against that list to get my index in it.
Is there any way to achieve this?
As you can read here, the Downward API is used to expose Pod and Container fields to a running Container:
There are two ways to expose Pod and Container fields to a running Container:
Environment variables
Volume Files
Together, these two ways of exposing Pod and Container fields are called the Downward API.
It is not meant to expose any information about other objects/resources, such as the ReplicaSet or Deployment that manage such a Pod.
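For the Pod fields that are exposed, a minimal sketch of the environment-variable flavour looks like this (the variable name MY_POD_NAME and the Pod name are arbitrary):
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  containers:
  - name: main
    image: nginx:alpine
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name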
You can see exactly which fields the yaml manifest describing a running Pod contains by executing:
kubectl get pods <pod_name> -o yaml
An example fragment of its output may look as follows:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    <some annotations here>
    ...
  creationTimestamp: "2020-10-08T22:18:03Z"
  generateName: nginx-deployment-7bffc778db-
  labels:
    app: nginx
    pod-template-hash: 7bffc778db
  name: nginx-deployment-7bffc778db-8fzrz
  namespace: default
  ownerReferences: 👈
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet 👈
    name: nginx-deployment-7bffc778db 👈
  ...
As you can see, the metadata section contains ownerReferences, which in the above example holds one reference to the ReplicaSet object by which this Pod is managed. So you can get this particular ReplicaSet's name quite easily, as it is part of the Pod's yaml manifest.
However, you cannot get information about the other Pods managed by this ReplicaSet that way.
Such information can only be obtained from the API server, e.g. by using the kubectl client or programmatically with direct calls to the API.
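For example, assuming your replicas carry a common label such as app: frontend (which is how a ReplicaSet selects them anyway), listing all of their names via kubectl boils down to:
kubectl get pods -l app=frontend -o jsonpath='{.items[*].metadata.name}'
A program running inside the Pod could issue the equivalent list request against the API server, provided its ServiceAccount has permission to list Pods.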
I'm trying to understand why the kubernetes docs recommend specifying the service before the deployment in one configuration file:
The resources will be created in the order they appear in the file. Therefore, it’s best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
Does it mean spreading pods between Kubernetes cluster nodes?
I tested with the following configuration, where a deployment is located before a service, and the pods are distributed between nodes without any issues.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: incorrect-order
  namespace: test
spec:
  selector:
    matchLabels:
      app: incorrect-order
  replicas: 2
  template:
    metadata:
      labels:
        app: incorrect-order
    spec:
      containers:
      - name: incorrect-order
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: incorrect-order
  namespace: test
  labels:
    app: incorrect-order
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: incorrect-order
Another explanation is that some environment variables with the service URL will not be set for the pods in this case. However, that also works fine when the configuration is inside one file, like the example above.
Could you please explain why it is better to specify the service before the deployment in the case of one configuration file? Or maybe it is an outdated recommendation.
If you use DNS as service discovery, the order of creation doesn't matter.
In the case of environment vars (the second way K8s offers service discovery) the order matters, because once those vars are passed to the starting pod, they cannot be modified later if the service definition changes.
So if your service is deployed before your pod starts, the service env vars are injected into the linked pod.
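For the Service from the question, that means a pod started after the Service would see variables along these lines (the addresses are illustrative):
kubectl exec <pod-name> -n test -- env | grep INCORRECT_ORDER
INCORRECT_ORDER_SERVICE_HOST=10.96.47.11
INCORRECT_ORDER_SERVICE_PORT=80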
If you create a Pod/Deployment resource with labels, this resource will be exposed through a Service once the latter is created (with the proper selector to indicate which resource to expose).
You are correct in that it affects the spread among the worker nodes.
Deployments without a Service will simply be scheduled onto the nodes with the least cpu/memory allocation. For instance, a brand new and empty node will get all the new pods from a new deployment.
With a Deployment that also has a Service, the scheduler tries to spread the pods between nodes, disregarding the cpu/memory load (within limits), to help the Service survive better.
It puzzles me that a Deployment on its own doesn't cause an optimal spread, but it doesn't, not yet at least.
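If you want an explicit, Service-independent spread, newer Kubernetes versions let you declare it in the Pod template via topologySpreadConstraints; a minimal sketch, reusing the incorrect-order label from the question, to be merged into spec.template.spec of the Deployment:
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: incorrect-order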
This is the answer from the official documentation:
The resources will be created in the order they appear in the file. Therefore, it's best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
Kubernetes Documentation / Concepts / Cluster Administration / Managing Resources
I would like to deploy an application cluster by managing my deployment via the k8s Deployment object. The documentation has me extremely confused. My basic layout has the following components that scale independently:
API server
UI server
Redis cache
Timer/Scheduled task server
Technically, all 4 above belong in separate pods that are scaled independently.
My questions are:
Do I need to create pod.yml files and then somehow reference them in the deployment.yml file, or can a deployment file also embed pod definitions?
K8s documentation seems to imply that the spec portion of a Deployment is equivalent to defining one pod. Is that correct? What if I want to declaratively describe multi-pod deployments? Do I need multiple deployment.yml files?
Pagid's answer has most of the basics. You should create 4 Deployments for your scenario. Each Deployment will create a ReplicaSet that schedules and supervises the collection of Pods for that Deployment.
Each Deployment will most likely also require a Service in front of it for access. I usually create a single yaml file that has a Deployment and the corresponding Service in it. Here is an example for an nginx.yaml that I use:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    name: nginx
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginxcontainer
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
Here is some additional information for clarification:
A POD is not a scalable unit. A Deployment that schedules PODs is.
A Deployment is meant to represent a single group of PODs fulfilling a single purpose together.
You can have many Deployments work together in the virtual network of the cluster.
For accessing a Deployment that may consist of many PODs running on different nodes you have to create a Service.
Deployments are meant to contain stateless services. If you need to store state, you need to create a StatefulSet instead (e.g. for a database service).
You can use the Kubernetes API reference for the Deployment: you'll find that the spec->template field is of type PodTemplateSpec, and together with the related comment (Template describes the pods that will be created.) it answers your questions. A longer description can of course be found in the Deployment user guide.
To answer your questions...
1) The Pods are managed by the Deployment, and defining them separately doesn't make sense, as they are created on demand by the Deployment. Keep in mind that there might be more replicas of the same pod type.
2) For each of the applications in your list, you'd have to define one Deployment - which also makes sense when it comes to different replica counts and application rollouts.
3) You haven't asked that, but it's related - along with separate Deployments, each of your applications will also need a dedicated Service so the others can access it.
Additional information:
API server: use a Deployment
UI server: use a Deployment
Redis cache: use a StatefulSet
Timer/Scheduled task server: maybe use a StatefulSet (if your service holds some state)
How does Kubernetes choose the minion among the many available for a given pod creation command? Is it something that can be controlled/tweaked?
If replicated pods are submitted for deployment, is Kubernetes intelligent enough to place them on different minions if they expose the same container/host port pair? Or does it always place different replicas on different minions?
What about corner cases, like two different pods (not necessarily replicas) that expose the same host/container port pair being submitted? Will they carefully be placed on different minions?
If a pod requires specific compute/memory resources, can it be placed on a minion/host that has sufficient resources left to meet those requirements?
In summary, is there detailed documentation on kubernetes pod placement strategy?
Pods are scheduled onto hosts using the algorithm in generic_scheduler.go
There are rules that prevent host-port conflicts, and that also make sure there are sufficient memory and cpu resources available: predicates.go
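On the compute/memory question: the scheduler honors per-container resource requests, so a Pod like the following sketch will only be placed on a host with at least that much unallocated cpu and memory (the values are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 256Mi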
One way to choose the minion for pod creation is by using a nodeSelector. Inside the pod's yaml file, specify the label of the minion on which you want the pod to run.
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    key: value
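For this nodeSelector to match anything, the target minion has to carry that label first; you can attach it with (node name and key/value are placeholders):
kubectl label nodes <node-name> key=value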