What is the default allocation when resources are not specified in Kubernetes?

Below is a Kubernetes Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
As I have not specified resources, how much memory and CPU will be allocated? Is there a kubectl command to find what is allocated for the Pod?

If resources are not specified for the Pod, the Pod will be scheduled to any node and resources are not considered when choosing a node.
The Pod might be "terminated" if it uses more memory than is available, or get little CPU time, as Pods with specified resources will be prioritized. It is good practice to set resources for your Pods.
See Configure Quality of Service for Pods - your Pod will be classified as "Best Effort":
For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.

In your case, Kubernetes will assign the QoS class BestEffort to your pod.
That means kube-scheduler has no resource information to schedule your pod with and just does its best.
That also means your pod can consume any amount of resources (CPU/memory) it wants, but the kubelet will evict it first if the node comes under resource pressure.
To see the actual resource usage of your pod, you can use kubectl top pod <pod-name> (this requires a metrics server, e.g. metrics-server, running in the cluster).
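If you want to confirm which QoS class and which requests/limits (if any) Kubernetes recorded for the pod, the pod's status exposes this directly. The pod name below assumes the static-web example from the question:

kubectl get pod static-web -o jsonpath='{.status.qosClass}'
kubectl describe pod static-web

The first command prints the assigned QoS class (BestEffort here); the second shows a "QoS Class" line along with any per-container requests and limits that were set.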

Related

Verifying resources in a deployment yaml

In a deployment YAML, how can we verify that the resources we need for the running pods are guaranteed by Kubernetes?
Is there a way to figure that out?
Specify your resource requests in the deployment YAML. The kube-scheduler takes those requests into account and only places a pod on a node that has enough allocatable capacity for them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
How are Pods with resource requests scheduled? (Ref)
When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
N.B.: However, if you want a container not to use more than its allowed resources, specify the limit too.
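For illustration (the values below are arbitrary, not taken from the example above), a container resources block that sets both requests and a higher limit looks like:

resources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: 200m
    memory: 200Mi

With requests and limits both present but not equal, the pod is classified as Burstable; the next answer covers the stricter Guaranteed class.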
Kubernetes assigns QoS (Quality of Service) classes to running pods. The class that both guarantees and caps a pod's requests and limits is qosClass: Guaranteed.
To get a QoS class of Guaranteed for your pods, all of the following must hold (a minimal sketch follows the list):
Every Container in the Pod must have a memory limit and a memory request.
For every Container in the Pod, the memory limit must equal the memory request.
Every Container in the Pod must have a CPU limit and a CPU request.
For every Container in the Pod, the CPU limit must equal the CPU request.
These restrictions apply to init containers and app containers equally.
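The sketch referred to above: a container spec that satisfies all four rules (the concrete values are only illustrative):

spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi

Because the CPU and memory requests equal the corresponding limits for every container, Kubernetes sets status.qosClass to Guaranteed for this pod.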
Also check out the reference page for more info:
https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/

Difference between Replication Controller and livenessProbe in K8s

I'm confused about the difference between a replication controller and a livenessProbe in K8s. Could anyone explain this?
ReplicationController and livenessProbe have nothing in common, so it is really hard to confuse them; moreover, the Kubernetes documentation (check the links) has a great explanation of both objects.
A ReplicationController is the older version of a ReplicaSet.
A replication controller basically manages the number of pod replicas running inside the Kubernetes cluster.
A replication controller operates at the cluster level.
A liveness probe operates at the pod (container) level. It pings an endpoint (or runs a check) on a frequent, regular basis to verify that the service is alive; if the service is not alive, the kubelet restarts the container.
ReplicationController and livenessProbe have nothing in common.
A Replication Controller in K8s makes sure that a specified number of pod replicas are running at any one time, so those pods are always up and available.
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
Example Replication Controller config file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Workflow of Replication Controllers (diagram omitted).
You can find more information here: Replication Controller.
Useful article: replication controller actions.
Liveness probe in K8s.
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container.
The kubelet can optionally perform and react to two kinds of probes on running Containers:
livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
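As a quick illustration (the pod name, image, path, and timing values here are made up for this sketch, not taken from the documentation quote), an HTTP liveness probe can be declared like this:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10

If the GET request starts failing, the kubelet kills the container and it is restarted according to the pod's restartPolicy.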
You can find more information here: pod lifecycle.
Useful article: Kubernetes probes.

How Kubernetes knows resource requests and limits?

Here is a YAML file that has been created to be deployed in Kubernetes. Since there are no resource requests and limits in the file, I would like to know how Kubernetes determines the resource requests and limits to run it with. How can I fetch that information?
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
You can "kubectl describe" your pod and see what resources actually got assigned. With a LimitRange, Kubernetes can assign default requests and limits to a pod if they are not part of its spec.
If no requests/limits are assigned, your pod gets the BestEffort quality of service and can be evicted first in case of resource pressure on the node.
You can use the steps below to fetch the resource limits assigned to the pod.
Create the pod
-------------------
kubectl run test-resource-limits --image=busybox --limits "memory=100Mi" \
--command -- /bin/sh -c "while true; do sleep 2; done"
Test the resource limits that are specified
-------------------------------------------
kubectl get pods test-resource-limits-7b8b46c8c7-jdjgs \
-o=jsonpath='{.spec.containers[0].resources}'
If you don't specify resource requests and limits, Kubernetes will run your workload without them, meaning your pod could potentially use all the CPU and RAM on the node.
A caveat to that: if your namespace has defaults set with a LimitRange, those defaults will be applied to workloads that don't specify a resource spec.
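As a sketch, a LimitRange that sets such namespace defaults could look like this (the name and values are only illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
spec:
  limits:
  - type: Container
    defaultRequest:   # applied as the request when a container specifies none
      cpu: 100m
      memory: 128Mi
    default:          # applied as the limit when a container specifies none
      cpu: 500m
      memory: 256Mi

Create it in the namespace, and containers deployed there without a resources section will show these values in kubectl describe pod.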

Prometheus in k8s (metrics)

I deployed Prometheus in Kubernetes following this manual.
The storage scheme was designed as follows:
Prometheus inside Kubernetes stores the metrics for 24 hours.
Prometheus outside Kubernetes stores the metrics for 1 week.
A federation is set up between them.
Has anyone faced the issue that, after the pods are removed, the metrics on them go missing after a certain period of time (much less than 24 hours)?
This is perfectly normal if you do not have persistent storage configured for your Prometheus pod. You should use a PV/PVC to define a stable place to keep your Prometheus data; otherwise, if your pod is recreated, it starts with a clean slate.
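A minimal sketch of that setup (the claim name and size are assumptions; the /prometheus mount path assumes the official image's default data directory):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

and in the Prometheus pod spec:

spec:
  containers:
  - name: prometheus
    image: prom/prometheus
    volumeMounts:
    - name: data
      mountPath: /prometheus
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: prometheus-data

With this in place the time-series data survives pod restarts and rescheduling, as long as the claim is bound to a real PersistentVolume.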
PV/PVC needs dedicated storage servers in the cluster. If there is no money for storage servers, here is a cheaper approach:
Label a node:
$ kubectl label nodes <node name> prometheus=yes
Force all the prometheus pods to be created on the same labeled node by using nodeSelector:
nodeSelector:
  prometheus: "yes"  # quote the value so YAML parses it as a string, not a boolean
Create an emptyDir volume for each Prometheus pod. An emptyDir volume is first created when the Prometheus pod is assigned to the labeled node and exists as long as that pod is running on that node; the data survives container crashes, but it is deleted when the pod is removed from the node.
spec:
  containers:
  - image: <prometheus image>
    name: <prometheus pod name>
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
This approach makes all the Prometheus pods run on the same node, with node-local storage for the metrics - a cheaper approach that prays the Prometheus node does not crash.

How are minions chosen for given pod creation request

How does Kubernetes choose the minion among many available for a given pod creation command? Is it something that can be controlled/tweaked?
If replicated pods are submitted for deployment, is kubernetes intelligent enough to place them in different minions if they expose the same container/host port pair? Or does it always place different replicas in different minions ?
What about corner cases like what if two different pods (not necessarily replicas) that expose same host/container port pair are submitted? Will they carefully be placed on different minions ?
If a pod has specific compute/memory requirements, can it be placed on a minion/host that has sufficient resources left to meet those requirements?
In summary, is there detailed documentation on kubernetes pod placement strategy?
Pods are scheduled to nodes using the algorithm in generic_scheduler.go.
There are rules that prevent host-port conflicts, and also rules that make sure there is sufficient memory and CPU to satisfy the pod's requirements; see predicates.go.
One way to choose the minion (node) for pod creation is to use a nodeSelector. Inside the pod's YAML file, specify the label of the minion on which you want the pod to run.
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    key: value
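For this nodeSelector to match anything, the chosen minion must actually carry that label; the key/value pair below simply mirrors the placeholder in the YAML above:

kubectl label nodes <node-name> key=value
kubectl get pods -o wide   # the NODE column confirms where the pod landed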