Following the Deployment example in the docs, I'm trying to deploy the example nginx Deployment with the following config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
So far, the deployment always hangs. I tried to see whether I needed a pod named nginx to be deployed already, but that didn't solve the problem.
$ sudo kubectl get deployments
NAME               UPDATEDREPLICAS   AGE
nginx-deployment   0/3               34m
$ sudo kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Sat, 30 Jan 2016 06:03:47 +0000
Labels: app=nginx
Selector: app=nginx
Replicas: 0 updated / 3 total
StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge, 0 min ready seconds
OldReplicationControllers: nginx (2/2 replicas created)
NewReplicationController: <none>
No events.
When I check the events from Kubernetes, I see no events that belong to this deployment. Has anyone experienced this before?
The versions are as follows:
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}
If the deployment is not creating any pods, you could have a look at the events; an error might be reported there. For example:
kubectl get events --all-namespaces
NAMESPACE LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
default 8m 2d 415 wordpress Ingress Normal Service loadbalancer-controller no user specified default backend, using system default
kube-lego 2m 8h 49 kube-lego-7c66c7fddf ReplicaSet Warning FailedCreate replicaset-controller Error creating: pods "kube-lego-7c66c7fddf-" is forbidden: service account kube-lego/kube-lego2-kube-lego was not found, retry after the service account is created
Also have a look at kubectl get rs --all-namespaces.
I found an answer on the issues page:
In order to get the deployments to work after you enable it and restart the kube-apiserver, you must also restart the kube-controller-manager.
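How you restart those components depends on how your control plane was installed; on a node where they run as systemd services it might look roughly like this (unit names are an assumption and vary by setup; some installations run them as static pods instead):
# Assumes the control plane components run as systemd services
sudo systemctl restart kube-apiserver
sudo systemctl restart kube-controller-manager
# Verify the controller manager came back up
sudo systemctl status kube-controller-manager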
You can check what is wrong with the command kubectl describe pod name_of_your_pod.
Cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io "grafanaps-tls" not found"
So, from the investigation, I'm not able to find the grafanaps-tls certificate:
$ kubectl get certificates
NAME                 READY   SECRET               AGE
alertmanagerdf-tls   False   alertmanagerdf-tls   1y61d
prometheusps-tls     False   prometheusps-tls     1y58d
We have done the following: the nginx ingress and cert-manager were outdated and no longer compatible with Kubernetes 1.22, so an upgrade of those components was initiated in order to restore pod operation.
The cmctl check api -n cert-manager command now succeeds: the cert-manager API has been upgraded to version 1.7 and orphaned secrets have been cleaned up.
Cert-manager/webhook "msg"="Detected root CA rotation - regenerating serving certificates"
After a restart, the logs looked mostly clean.
From my findings, the issue is the integration of cert-manager with the Kubernetes ingress controller.
So I was mostly interested in the cert-manager configuration, in particular the ingress-shim configuration and the args section.
It appears that the SSL certificates for several servers have expired, and it looks like an issue with the certificate resources or with the integration of cert-manager with the Kubernetes ingress controller.
Config:
C:\Windows\system32>kubectl describe deployment cert-manager-cabictor -n cert-manager
Name: cert-manager-cabictor
Namespace: cert-manager
CreationTimestamp: Thu, 01 Dec 2022 18:31:02 +0530
Labels: app=cabictor
app.kubernetes.io/component=cabictor
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cabictor
app.kubernetes.io/version=v1.7.3
helm.sh/chart=cert-manager-v1.7.3
Annotations: deployment.kubernetes.io/revision: 2
meta.helm.sh/release-name: cert-manager
meta.helm.sh/release-namespace: cert-manager
Selector: app.kubernetes.io/component=cabictor,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cabictor
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=cabictor
app.kubernetes.io/component=cabictor
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cabictor
app.kubernetes.io/version=v1.7.3
helm.sh/chart=cert-manager-v1.7.3
Service Account: cert-manager-cabictor
Containers:
cert-manager:
Image: quay.io/jetstack/cert-manager-cabictor:v1.7.3
Port: <none>
Host Port: <none>
Args:
--v=2
--leader-election-namespace=kube-system
Environment:
POD_NAMESPACE: (v1:metadata.namespace)
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: cert-manager-cabictor-5b65bcdbbd (1/1 replicas created)
Events: <none>
I was not able to identify and fix the root cause here.
What is the problem here, and how can it be resolved? Any help would be greatly appreciated.
Imagine the following deployment definition in kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    env: staging
spec:
  ...
I have two questions in particular:
1). The label env: staging won't be available in created pods. How can I access this data programmatically in client-go?
2). When a pod is created/updated, how can I find which deployment it belongs to?
1). The label env: staging won't be available in created pods. How can I access this data programmatically in client-go?
You can get the Deployment using client-go. See the example Create, Update & Delete Deployment for operations on a Deployment.
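For instance, a minimal client-go sketch (assuming a recent client-go version, in-cluster config, and the nginx-deployment from the question in the default namespace) that reads the Deployment's labels could look like this:
// Minimal sketch: read the labels of a Deployment with client-go.
// Assumes in-cluster config; use clientcmd instead when running outside the cluster.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the Deployment itself; its metadata carries the env: staging label.
	deploy, err := clientset.AppsV1().Deployments("default").Get(
		context.TODO(), "nginx-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for k, v := range deploy.Labels {
		fmt.Printf("%s=%s\n", k, v)
	}
}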
2). When a pod is created/updated, how can I find which deployment it belongs to?
When a Deployment is created, a ReplicaSet is created that manages the Pods.
See the ownerReferences field of a Pod to see which ReplicaSet manages it. This is described in How a ReplicaSet works.
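Programmatically, here is a hedged sketch that builds on the clientset and imports from the example above, walking a Pod's ownerReferences up to its ReplicaSet and then to the owning Deployment (deploymentForPod is a hypothetical helper name):
// Sketch: find the Deployment that (indirectly) owns a Pod via ownerReferences.
func deploymentForPod(ctx context.Context, cs *kubernetes.Clientset, namespace, podName string) (string, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	for _, ref := range pod.OwnerReferences {
		if ref.Kind != "ReplicaSet" {
			continue
		}
		// The Pod is owned by a ReplicaSet; that ReplicaSet's owner should be the Deployment.
		rs, err := cs.AppsV1().ReplicaSets(namespace).Get(ctx, ref.Name, metav1.GetOptions{})
		if err != nil {
			return "", err
		}
		for _, rsRef := range rs.OwnerReferences {
			if rsRef.Kind == "Deployment" {
				return rsRef.Name, nil
			}
		}
	}
	return "", fmt.Errorf("no owning Deployment found for pod %s/%s", namespace, podName)
}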
Hope you are enjoying your Kubernetes journey!
In fact, the label won't be available in the created pods, but you can add it to the manifest, in the pod template section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    # Here you have the deployment labels
    app: nginx
spec:
  selector:
    matchLabels:
      # Here you have the selector that indicates to the deployment
      # (more exactly to the replicasets of the deployment)
      # which pods to track to check if the number of replicas is respected.
      app: nginx
  ...
  template:
    metadata:
      labels:
        # Here you have the POD labels that need to match the selector.matchLabels section
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ...
You can check the pods' labels by typing:
❯ k get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 7m13s app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 7m13s app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 7m13s app=nginx,pod-template-hash=6bdc4445fd
You can get the deployments' labels by typing:
❯ k get deploy --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
nginx-deploy 3/3 3 3 7m39s app=nginx
You can add a custom column to your "kubectl get po" command to display the value of the "app" label when getting the pods:
❯ k get pod -L app
NAME READY STATUS RESTARTS AGE APP
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 8m30s nginx
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 8m30s nginx
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 8m30s nginx
And you can use multiple -L flags:
❯ k get pod -L app -L test
NAME READY STATUS RESTARTS AGE APP TEST
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 9m46s nginx
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 9m46s nginx
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 9m46s nginx
In general, the names of the pods begin with the name of their owner (deployment, replicaset, statefulset, job, etc.).
When you use a deployment to create a pod, you can be sure that between the deployment and the pod there is a replicaset (the deployment only manages the different versions of the replicaset, while the replicaset only ensures that the current number of actual replicas matches the number of replicas requested in the manifest, using its label selector!).
So you can, in fact, check the ownerReferences field of a pod by typing:
❯ kubectl get po -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME OWNER OWNER_KIND
nginx-deploy-6bdc4445fd-5qlhg nginx-deploy-6bdc4445fd ReplicaSet
nginx-deploy-6bdc4445fd-pgkhb nginx-deploy-6bdc4445fd ReplicaSet
nginx-deploy-6bdc4445fd-xdz59 nginx-deploy-6bdc4445fd ReplicaSet
You can do the same with replicasets to get their deployment owners:
❯ kubectl get rs -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME OWNER OWNER_KIND
nginx-deploy-6bdc4445fd nginx-deploy Deployment
That's how you can quickly see with kubectl who owns whom.
Here is a little reading about owners and dependents: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
Hope this has helped you. bguess
I want to scale my application based on custom metrics (RPS or active connections in this case), without having to set up Prometheus or use any external service. I can expose this API from my web app. What are my options?
Monitoring different types of metrics (e.g. custom metrics) on most Kubernetes clusters is the foundation that leads to more stable and reliable systems/applications/workloads. As discussed in the comments section, to monitor custom metrics, it is recommended to use tools designed for this purpose rather than inventing a workaround. I'm glad that in this case the final decision was to use Prometheus and KEDA to properly scale the web application.
I would like to briefly show other community members who are struggling with similar considerations how KEDA works.
To use Prometheus as a scaler for KEDA, we need to install and configure Prometheus.
There are many different ways to install Prometheus and you should choose the one that suits your needs.
I've installed the kube-prometheus stack with Helm:
NOTE: I allowed Prometheus to discover all PodMonitors/ServiceMonitors within its namespace, without applying label filtering by setting the prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues and prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues values to false.
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prom-1 prometheus-community/kube-prometheus-stack --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prom-1-kube-prometheus-sta-alertmanager-0 2/2 Running 0 2m29s
prom-1-grafana-865d4c8876-8zdhm 3/3 Running 0 2m34s
prom-1-kube-prometheus-sta-operator-6b5d5d8df5-scdjb 1/1 Running 0 2m34s
prom-1-kube-state-metrics-74b4bb7857-grbw9 1/1 Running 0 2m34s
prom-1-prometheus-node-exporter-2v2s6 1/1 Running 0 2m34s
prom-1-prometheus-node-exporter-4vc9k 1/1 Running 0 2m34s
prom-1-prometheus-node-exporter-7jchl 1/1 Running 0 2m35s
prometheus-prom-1-kube-prometheus-sta-prometheus-0 2/2 Running 0 2m28s
Then we can deploy an application that will be monitored by Prometheus. I've created a simple application that exposes some metrics (such as nginx_vts_server_requests_total) on the /status/format/prometheus path:
$ cat app-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app-1
        image: mattjcontainerregistry/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: app-1
  type: LoadBalancer
Next, create a ServiceMonitor that describes how to monitor our app-1 application:
$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  selector:
    matchLabels:
      app: app-1
  endpoints:
  - interval: 15s
    path: "/status/format/prometheus"
    port: http
After waiting some time, let's check the app-1 logs to make sure that it is being scraped correctly:
$ kubectl get pods | grep app-1
app-1-5986d56f7f-2plj5 1/1 Running 0 35s
$ kubectl logs -f app-1-5986d56f7f-2plj5
10.44.1.6 - - [07/Feb/2022:16:31:11 +0000] "GET /status/format/prometheus HTTP/1.1" 200 2742 "-" "Prometheus/2.33.1" "-"
10.44.1.6 - - [07/Feb/2022:16:31:26 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3762 "-" "Prometheus/2.33.1" "-"
10.44.1.6 - - [07/Feb/2022:16:31:41 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3762 "-" "Prometheus/2.33.1" "-"
Now it's time to deploy KEDA. There are a few approaches to deploy KEDA runtime as described in the KEDA documentation.
I chose to install KEDA with Helm because it's very simple :-)
$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ kubectl create namespace keda
$ helm install keda kedacore/keda --namespace keda
The last thing we need to create is a ScaledObject which is used to define how KEDA should scale our application and what the triggers are. In the example below, I used the nginx_vts_server_requests_total metric.
NOTE: For more information on the prometheus trigger, see the Trigger Specification documentation.
$ cat scaled-object.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: scaled-app-1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-1
  pollingInterval: 30
  cooldownPeriod: 120
  minReplicaCount: 1
  maxReplicaCount: 5
  advanced:
    restoreToOriginalReplicaCount: false
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prom-1-kube-prometheus-sta-prometheus.default.svc:9090
      metricName: nginx_vts_server_requests_total
      query: sum(rate(nginx_vts_server_requests_total{code="2xx", service="app-1"}[2m])) # Note: query must return a vector/scalar single element response
      threshold: '10'
$ kubectl apply -f scaled-object.yaml
scaledobject.keda.sh/scaled-app-1 created
Finally, we can check if the app-1 application scales correctly based on the number of requests:
$ for a in $(seq 1 10000); do curl <PUBLIC_IP_APP_1> 1>/dev/null 2>&1; done
$ kubectl get hpa -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS
keda-hpa-scaled-app-1 Deployment/app-1 0/10 (avg) 1 5 1
keda-hpa-scaled-app-1 Deployment/app-1 15/10 (avg) 1 5 2
keda-hpa-scaled-app-1 Deployment/app-1 12334m/10 (avg) 1 5 3
keda-hpa-scaled-app-1 Deployment/app-1 13250m/10 (avg) 1 5 4
keda-hpa-scaled-app-1 Deployment/app-1 12600m/10 (avg) 1 5 5
$ kubectl get pods | grep app-1
app-1-5986d56f7f-2plj5 1/1 Running 0 36m
app-1-5986d56f7f-5nrqd 1/1 Running 0 77s
app-1-5986d56f7f-78jw8 1/1 Running 0 94s
app-1-5986d56f7f-bl859 1/1 Running 0 62s
app-1-5986d56f7f-xlfp6 1/1 Running 0 45s
As you can see above, our application has been correctly scaled to 5 replicas.
Nearly 3 years ago, Kubernetes would not carry out a rolling deployment if you had a single replica (Kubernetes deployment does not perform a rolling update when using a single replica).
Is this still the case? Is there any additional configuration required for this to work?
You are no longer required to have a minimum number of replicas to roll out an update using a Kubernetes rolling update.
I tested it on my lab (v1.17.4) and it worked like a charm having only one replica.
You can test it yourself using this Katacoda lab: Interactive Tutorial - Updating Your App
This lab is setup to create a deployment with 3 replicas. Before starting the lab, edit the deployment and change the number of replicas to one and follow the lab steps.
I created a lab using a different example, similar to your previous scenario. Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16.1
        ports:
        - containerPort: 80
Deployment is running with one replica only:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6c4699c59c-w8clt 1/1 Running 0 5s
Here I edited my nginx-deployment.yaml, changed the nginx version to nginx:latest, and rolled out my deployment by running replace:
$ kubectl replace -f nginx-deployment.yaml
deployment.apps/nginx-deployment replaced
Another option is to change the nginx version using the kubectl set image command:
kubectl set image deployment/nginx-deployment nginx-container=nginx:latest --record
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 0/1 ContainerCreating 0 3s
nginx-deployment-6c4699c59c-w8clt 1/1 Running 0 48s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 1/1 Running 0 6s
nginx-deployment-6c4699c59c-w8clt 0/1 Terminating 0 51s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 1/1 Running 0 13s
As you can see, everything worked normally with only one replica.
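You can also confirm that the rollout finished cleanly with kubectl rollout status (the second line is the typical success message):
$ kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out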
In the latest version of the documentation we can read:
Deployment ensures that only a certain number of Pods are down while
they are being updated. By default, it ensures that at least 75% of
the desired number of Pods are up (25% max unavailable).
Deployment also ensures that only a certain number of Pods are created
above the desired number of Pods. By default, it ensures that at most
125% of the desired number of Pods are up (25% max surge).
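If you want to be explicit with a single replica, you can also set the strategy so the replacement Pod is surged in before the old one is removed; a minimal sketch of the relevant spec fields:
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take the only replica down first
      maxSurge: 1         # allow one extra Pod while the new version starts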
Can I run a job and a deployment in a single config file/action,
where the deployment will wait for the job to finish and check that it succeeded before continuing with the deployment?
Based on the information you provided I believe you can achieve your goal using a Kubernetes feature called InitContainer:
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a restartPolicy of Never, Kubernetes does not restart the Pod.
I'll create an initContainer with a busybox image that runs a Linux command to wait for the service mydb to be running before proceeding with the deployment.
Steps to Reproduce:
- Create a Deployment with an initContainer which will run the job that needs to be completed before doing the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      restartPolicy: Always
      containers:
      - name: myapp-container
        image: busybox:1.28
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Many kinds of commands can be used in this field; you just have to select a Docker image that contains the binary you need (including your sequelize job).
Now let's apply it and see the status of the deployment:
$ kubectl apply -f my-app.yaml
deployment.apps/my-app created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 4s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 4s
The pods are held in Init:0/1 status, waiting for the completion of the init container.
- Now let's create the service that the initContainer is waiting for before completing its task:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
We will apply it and monitor the changes in the pods:
$ kubectl apply -f mydb-svc.yaml
service/mydb created
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 PodInitializing 0 93s
my-app-6b4fb4958f-44ds7 0/1 PodInitializing 0 94s
my-app-6b4fb4958f-s7wmr 1/1 Running 0 94s
my-app-6b4fb4958f-44ds7 1/1 Running 0 95s
^C
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-app-6b4fb4958f-44ds7 1/1 Running 0 99s
pod/my-app-6b4fb4958f-s7wmr 1/1 Running 0 99s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mydb ClusterIP 10.100.106.67 <none> 80/TCP 14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-app 2/2 2 2 99s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-app-6b4fb4958f 2 2 2 99s
If you need help to apply this to your environment let me know.
Although initContainers are a viable option for this solution, there is another option if you use Helm to manage and deploy to your cluster.
Helm has chart hooks that allow you to run a Job before other installations in the Helm chart occur. You mentioned that this is for a database migration before a service deployment. Some example Helm config to get this done could be...
apiVersion: batch/v1
kind: Job
metadata:
  name: api-migration-job
  namespace: default
  labels:
    app: api-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
      - name: platform-migration
        ...
This will run the job to completion before moving on to the installation / upgrade phases in the helm chart. You can see there is a 'hook-weight' variable that allows you to order these hooks if you desire.
This in my opinion is a more elegant solution than init containers, and allows for better control.
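For example, if you also had a hypothetical schema-check Job that must run before the migration Job above, giving it a lower hook weight would order it first, since Helm runs hooks in ascending weight order:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-2"   # lower weights run before "-1"
    "helm.sh/hook-delete-policy": before-hook-creation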