Kubernetes NetworkPolicies refused connection - kubernetes

I am trying to create the situation shown in the picture.
kubectl run frontend --image=nginx --labels="app=frontend" --port=30081 --expose
kubectl run backend --image=nginx --labels="app=backend" --port=30082 --expose
kubectl run database --image=nginx --labels="app=database" --port=30082
I created a network policy that should block all ingress and egress traffic that does not match a specific label definition.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: frontend
    matchLabels:
      app: backend
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
        matchLabels:
          app: backend
        matchLabels:
          app: database
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
        matchLabels:
          app: backend
        matchLabels:
          app: database
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
I tried to connect to the frontend pod without a label (command 1) and with the correct label (command 2), as shown below.
kubectl run busybox --image=busybox --rm -it --restart=Never -- wget
-O- http://frontend:30081 --timeout 2
kubectl run busybox --image=busybox --rm -it --restart=Never
--labels=app=frontend -- wget -O- http://frontend:30081 --timeout 2
I expected that the first command, which does not use the label, would be blocked and the second command would be allowed, but after running the second command I see the output "wget: can't connect to remote host (10.109.223.254): Connection refused". Did I define the network policy incorrectly?

As mentioned in the Kubernetes documentation about Network Policy:
Prerequisites
Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
As far as I know, Flannel, which is used by Katacoda, does not support network policies.
controlplane $ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-4tmhm 1/1 Running 0 16m
coredns-66bff467f8-v2dbj 1/1 Running 0 16m
etcd-controlplane 1/1 Running 0 16m
katacoda-cloud-provider-58f89f7d9-brnk2 1/1 Running 8 16m
kube-apiserver-controlplane 1/1 Running 0 16m
kube-controller-manager-controlplane 1/1 Running 0 16m
kube-flannel-ds-amd64-h5lrd 1/1 Running 1 16m
kube-flannel-ds-amd64-sdl4b 1/1 Running 0 16m
kube-keepalived-vip-gkhbz 1/1 Running 0 16m
kube-proxy-6gd8d 1/1 Running 0 16m
kube-proxy-zkldz 1/1 Running 0 16m
kube-scheduler-controlplane 1/1 Running 1 16m
As mentioned here
Flannel is focused on networking. For network policy, other projects such as Calico can be used.
Additionally, there is a nice tutorial which shows which CNI plugins support network policies.
So I would say it's not possible to do this on the Katacoda playground.
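For completeness, even on a cluster whose CNI implements NetworkPolicy (Calico, for example), the manifest above would need adjusting: a YAML mapping cannot repeat the matchLabels key, so each selector must be expressed once. A minimal sketch of the intended policy, assuming you want the three apps to reach each other plus DNS, could look like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  # select all three apps with one expression instead of repeated matchLabels keys
  podSelector:
    matchExpressions:
    - key: app
      operator: In
      values: ["frontend", "backend", "database"]
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchExpressions:
        - key: app
          operator: In
          values: ["frontend", "backend", "database"]
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - key: app
          operator: In
          values: ["frontend", "backend", "database"]
  - ports:              # allow DNS lookups
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP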

Related

How to scale my app on nginx metrics without prometheus?

I want to scale my application based on custom metrics (RPS or active connections in this case), without having to set up Prometheus or use any external service. I can expose this API from my web app. What are my options?
Monitoring different types of metrics (e.g. custom metrics) on most Kubernetes clusters is the foundation that leads to more stable and reliable systems/applications/workloads. As discussed in the comments section, to monitor custom metrics, it is recommended to use tools designed for this purpose rather than inventing a workaround. I'm glad that in this case the final decision was to use Prometheus and KEDA to properly scale the web application.
I would like to briefly show other community members who are struggling with similar considerations how KEDA works.
To use Prometheus as a scaler for KEDA, we need to install and configure Prometheus.
There are many different ways to install Prometheus and you should choose the one that suits your needs.
I've installed the kube-prometheus stack with Helm:
NOTE: I allowed Prometheus to discover all PodMonitors/ServiceMonitors within its namespace, without applying label filtering by setting the prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues and prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues values to false.
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prom-1 prometheus-community/kube-prometheus-stack --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prom-1-kube-prometheus-sta-alertmanager-0 2/2 Running 0 2m29s
prom-1-grafana-865d4c8876-8zdhm 3/3 Running 0 2m34s
prom-1-kube-prometheus-sta-operator-6b5d5d8df5-scdjb 1/1 Running 0 2m34s
prom-1-kube-state-metrics-74b4bb7857-grbw9 1/1 Running 0 2m34s
prom-1-prometheus-node-exporter-2v2s6 1/1 Running 0 2m34s
prom-1-prometheus-node-exporter-4vc9k 1/1 Running 0 2m34s
prom-1-prometheus-node-exporter-7jchl 1/1 Running 0 2m35s
prometheus-prom-1-kube-prometheus-sta-prometheus-0 2/2 Running 0 2m28s
Then we can deploy an application that will be monitored by Prometheus. I've created a simple application that exposes some metrics (such as nginx_vts_server_requests_total) on the /status/format/prometheus path:
$ cat app-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app-1
        image: mattjcontainerregistry/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: app-1
  type: LoadBalancer
Next, create a ServiceMonitor that describes how to monitor our app-1 application:
$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: app-1
  labels:
    app: app-1
spec:
  selector:
    matchLabels:
      app: app-1
  endpoints:
  - interval: 15s
    path: "/status/format/prometheus"
    port: http
After waiting some time, let's check the app-1 logs to make sure that it is being scraped correctly:
$ kubectl get pods | grep app-1
app-1-5986d56f7f-2plj5 1/1 Running 0 35s
$ kubectl logs -f app-1-5986d56f7f-2plj5
10.44.1.6 - - [07/Feb/2022:16:31:11 +0000] "GET /status/format/prometheus HTTP/1.1" 200 2742 "-" "Prometheus/2.33.1" "-"
10.44.1.6 - - [07/Feb/2022:16:31:26 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3762 "-" "Prometheus/2.33.1" "-"
10.44.1.6 - - [07/Feb/2022:16:31:41 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3762 "-" "Prometheus/2.33.1" "-"
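Optionally, before wiring up KEDA, you can confirm that the metric is actually queryable in Prometheus. A quick sketch (assuming the Prometheus service name created by the chart release above) is to port-forward the Prometheus service and run the same kind of query the ScaledObject will use later:
# forward the Prometheus API to localhost (service name taken from the kube-prometheus-stack release above)
$ kubectl port-forward svc/prom-1-kube-prometheus-sta-prometheus 9090:9090 &
# query the metric; a non-empty "result" array means the app is being scraped
$ curl -s 'http://localhost:9090/api/v1/query' \
    --data-urlencode 'query=sum(rate(nginx_vts_server_requests_total{service="app-1"}[2m]))'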
Now it's time to deploy KEDA. There are a few approaches to deploy KEDA runtime as described in the KEDA documentation.
I chose to install KEDA with Helm because it's very simple :-)
$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ kubectl create namespace keda
$ helm install keda kedacore/keda --namespace keda
The last thing we need to create is a ScaledObject which is used to define how KEDA should scale our application and what the triggers are. In the example below, I used the nginx_vts_server_requests_total metric.
NOTE: For more information on the prometheus trigger, see the Trigger Specification documentation.
$ cat scaled-object.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: scaled-app-1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-1
  pollingInterval: 30
  cooldownPeriod: 120
  minReplicaCount: 1
  maxReplicaCount: 5
  advanced:
    restoreToOriginalReplicaCount: false
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prom-1-kube-prometheus-sta-prometheus.default.svc:9090
      metricName: nginx_vts_server_requests_total
      query: sum(rate(nginx_vts_server_requests_total{code="2xx", service="app-1"}[2m])) # Note: query must return a vector/scalar single element response
      threshold: '10'
$ kubectl apply -f scaled-object.yaml
scaledobject.keda.sh/scaled-app-1 created
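As a quick sanity check (using the resource names from this example), you can verify that the ScaledObject was accepted and that KEDA created the corresponding HPA behind the scenes:
$ kubectl get scaledobject scaled-app-1
$ kubectl get hpa keda-hpa-scaled-app-1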
Finally, we can check if the app-1 application scales correctly based on the number of requests:
$ for a in $(seq 1 10000); do curl <PUBLIC_IP_APP_1> 1>/dev/null 2>&1; done
$ kubectl get hpa -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS
keda-hpa-scaled-app-1 Deployment/app-1 0/10 (avg) 1 5 1
keda-hpa-scaled-app-1 Deployment/app-1 15/10 (avg) 1 5 2
keda-hpa-scaled-app-1 Deployment/app-1 12334m/10 (avg) 1 5 3
keda-hpa-scaled-app-1 Deployment/app-1 13250m/10 (avg) 1 5 4
keda-hpa-scaled-app-1 Deployment/app-1 12600m/10 (avg) 1 5 5
$ kubectl get pods | grep app-1
app-1-5986d56f7f-2plj5 1/1 Running 0 36m
app-1-5986d56f7f-5nrqd 1/1 Running 0 77s
app-1-5986d56f7f-78jw8 1/1 Running 0 94s
app-1-5986d56f7f-bl859 1/1 Running 0 62s
app-1-5986d56f7f-xlfp6 1/1 Running 0 45s
As you can see above, our application has been correctly scaled to 5 replicas.

Kubernetes available schedulers

How would I display the available schedulers in my cluster in order to use a non-default one via the schedulerName field?
Any link to a document describing how to "install" and use a custom scheduler is highly appreciated :)
Thx in advance
Schedulers can be found among your kube-system pods. You can then filter the output to your needs with kube-scheduler as the search key:
➜ ~ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-9wfkp 0/1 Completed 15 264d
coredns-6955765f44-jmz9j 1/1 Running 16 264d
etcd-acid-fuji 1/1 Running 17 264d
kube-apiserver-acid-fuji 1/1 Running 6 36d
kube-controller-manager-acid-fuji 1/1 Running 21 264d
kube-proxy-hs2qb 1/1 Running 0 177d
kube-scheduler-acid-fuji 1/1 Running 21 264d
You can retrieve the yaml file with:
➜ ~ kubectl get pods -n kube-system <scheduler pod name> -oyaml
If you bootstrapped your cluster with kubeadm, you may also find the YAML files in /etc/kubernetes/manifests:
➜ manifests sudo cat /etc/kubernetes/manifests/kube-scheduler.yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler:v1.17.6
    imagePullPolicy: IfNotPresent
---------
The location for minikube is similar, but you do have to log in to the minikube virtual machine first with minikube ssh.
For more reading, please have a look at how to configure multiple schedulers and how to write a custom scheduler.
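Once an additional scheduler is running in the cluster, you point a workload at it with the spec.schedulerName field. A minimal sketch (my-custom-scheduler is a hypothetical name; use whatever name your scheduler registers with):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-custom-scheduled
spec:
  schedulerName: my-custom-scheduler   # hypothetical; omit the field to fall back to "default-scheduler"
  containers:
  - name: nginx
    image: nginx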
You can try this one:
kubectl get pods --all-namespaces | grep scheduler

Not able to connect to kafka brokers

I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on prem k8s cluster.
I'm trying to expose it my using a TCP controller with nginx.
My TCP nginx configmap looks like
data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entry in my nginx ingress controller:
- name: <zookeper-tcp-port>-tcp
  port: <zookeper-tcp-port>
  protocol: TCP
  targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
  port: <kafka-tcp-port>
  protocol: TCP
  targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance.
When I just try to connect to the IP and port using kafka tools, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the nginx controller except:
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:14 +0000]TCP200000.001
From the pod kafka-zookeeper-0 I'm getting loads of:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong?
Thanks in advance.
TL;DR:
- Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
- Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
- Set bootstrap-server in your Kafka tools to <Cluster_External_IP>:31090.
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints.
These Endpoints are then used to generate instance-specific DNS records in the form of:
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
It creates a DNS name for each pod, e.g:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
This is what makes these services able to connect to each other inside the cluster.
I've gone through a lot of trial and error until I realized how it was supposed to work. Based on your TCP Nginx ConfigMap, I believe you faced the same issue.
The Nginx ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
I realized that you don't need to expose Zookeeper, since it's an internal service handled by the Kafka brokers.
I also realized that you are trying to expose cp-kafka:9092 which is the headless service, also only used internally, as I explained above.
In order to get outside access you have to set the parameters nodeport.enabled to true as stated here: External Access Parameters.
It adds one service to each kafka-N pod during chart deployment.
Then you change your configmap to map to one of them:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
Note that the service created has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to.
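For reference, the per-broker Service the chart generates should look roughly like the sketch below (inferred from the zookeeper example further down and the kubectl get svc output; labels may differ slightly between chart versions):
apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - name: external-broker
    port: 19092          # matches the servicePort value set in values.yaml
    targetPort: 31090    # the external listener port opened on the broker
    nodePort: 31090
    protocol: TCP
  selector:
    app: cp-kafka
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0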
Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
Set your kafka tools to <Cluster_External_IP>:31090
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
Create the TCP configmap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
My ingress is on IP 35.226.189.123; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have a minikube, so I can use a kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to access Kafka from outside the cluster.
If you need external access to Zookeeper as well, I'll leave a service model for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
It will create a service for it:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
Patch your configmap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
Add the Ingress rule:
ports:
- containerPort: 31181
  hostPort: 31181
  protocol: TCP
Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!

Kubernetes job and deployment

Can I run a job and a deployment in a single config file/action,
where the deployment will wait for the job to finish and check whether it was successful, so it can continue with the rollout?
Based on the information you provided I believe you can achieve your goal using a Kubernetes feature called InitContainer:
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a restartPolicy of Never, Kubernetes does not restart the Pod.
I'll create an initContainer with a busybox image that runs a Linux command to wait for the service mydb to be running before proceeding with the deployment.
Steps to Reproduce:
- Create a Deployment with an initContainer which will run the job that needs to be completed before doing the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      restartPolicy: Always
      containers:
      - name: myapp-container
        image: busybox:1.28
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Many kinds of commands can be used in this field; you just have to pick a Docker image that contains the binary you need (including your sequelize job).
Now let's apply it and see the status of the deployment:
$ kubectl apply -f my-app.yaml
deployment.apps/my-app created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 4s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 4s
The pods are held in the Init:0/1 status, waiting for the completion of the init container.
- Now let's create the service that the initContainer is waiting for before completing its task:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
We will apply it and monitor the changes in the pods:
$ kubectl apply -f mydb-svc.yaml
service/mydb created
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 PodInitializing 0 93s
my-app-6b4fb4958f-44ds7 0/1 PodInitializing 0 94s
my-app-6b4fb4958f-s7wmr 1/1 Running 0 94s
my-app-6b4fb4958f-44ds7 1/1 Running 0 95s
^C
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-app-6b4fb4958f-44ds7 1/1 Running 0 99s
pod/my-app-6b4fb4958f-s7wmr 1/1 Running 0 99s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mydb ClusterIP 10.100.106.67 <none> 80/TCP 14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-app 2/2 2 2 99s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-app-6b4fb4958f 2 2 2 99s
If you need help to apply this to your environment let me know.
Although initContainers are a viable option for this, there is another if you use Helm to manage and deploy to your cluster.
Helm has chart hooks that allow you to run a Job before other installations in the helm chart occur. You mentioned that this is for a database migration before a service deployment. Some example helm config to get this done could be...
apiVersion: batch/v1
kind: Job
metadata:
  name: api-migration-job
  namespace: default
  labels:
    app: api-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
      - name: platform-migration
        ...
This will run the job to completion before moving on to the installation / upgrade phases in the helm chart. You can see there is a 'hook-weight' variable that allows you to order these hooks if you desire.
This in my opinion is a more elegant solution than init containers, and allows for better control.
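To make that concrete, a complete version of such a hook Job could look like the sketch below. The image, command, and Secret name are hypothetical placeholders for whatever your sequelize migration actually needs:
apiVersion: batch/v1
kind: Job
metadata:
  name: api-migration-job
  namespace: default
  labels:
    app: api-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 3                  # give up (and fail the release) after a few attempts
  template:
    spec:
      restartPolicy: Never         # Job pods must use Never or OnFailure
      containers:
      - name: platform-migration
        image: my-registry/my-api:1.0.0                    # hypothetical image containing the migration tooling
        command: ["npx", "sequelize-cli", "db:migrate"]    # hypothetical migration command
        envFrom:
        - secretRef:
            name: my-api-db-credentials                    # hypothetical Secret with DB connection settings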

How to change the default nodeport range on Mac (docker-desktop)?

How to change the default nodeport range on Mac (docker-desktop)?
I'd like to change the default nodeport range on Mac. Is it possible? I'm glad to have found this article: http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range. Since I can't find /etc/kubernetes/manifests/kube-apiserver.yaml in my environment, I tried to achieve what I want to do by running sudo kubectl edit pod kube-apiserver-docker-desktop --namespace=kube-system and add the parameter --service-node-port-range=443-22000. But when I tried to save it, I got the following error:
# pods "kube-apiserver-docker-desktop" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
(I get the same error even if I don't touch port 443.) Can someone please share his/her thoughts or experience? Thanks!
Append:
skwok-mbp:kubernetes skwok$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
docker compose 1/1 1 1 15d
docker compose-api 1/1 1 1 15d
ingress-nginx nginx-ingress-controller 1/1 1 1 37m
kube-system coredns 2/2 2 2 15d
skwok-mbp:kubernetes skwok$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default fortune-configmap-volume 2/2 Running 4 14d
default kubia-2qzmm 1/1 Running 2 15d
docker compose-6c67d745f6-qqmpb 1/1 Running 2 15d
docker compose-api-57ff65b8c7-g8884 1/1 Running 4 15d
ingress-nginx nginx-ingress-controller-756f65dd87-sq6lt 1/1 Running 0 37m
kube-system coredns-fb8b8dccf-jn8cm 1/1 Running 6 15d
kube-system coredns-fb8b8dccf-t6qhs 1/1 Running 6 15d
kube-system etcd-docker-desktop 1/1 Running 2 15d
kube-system kube-apiserver-docker-desktop 1/1 Running 2 15d
kube-system kube-controller-manager-docker-desktop 1/1 Running 29 15d
kube-system kube-proxy-6nzqx 1/1 Running 2 15d
kube-system kube-scheduler-docker-desktop 1/1 Running 30 15d
Update: The example from the documentation shows a way to adjust apiserver parameters during Minikube start:
minikube start --extra-config=apiserver.service-node-port-range=1-65535
--extra-config: A set of key=value pairs that describe configuration that may be passed to different components. The key should be '.' separated, and the first part before the dot is the component to apply the configuration to. Valid components are: kubelet, apiserver, controller-manager, etcd, proxy, scheduler. (link)
The list of available options can be found in the CLI documentation.
Another way to change kube-apiserver parameters for Docker-for-desktop on Mac:
Log in to the Docker VM:
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
#(you can also use privileged container for the same purpose)
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
#or
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
# as suggested here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/5
# in case of minikube use the following command:
$ minikube ssh
Edit kube-apiserver.yaml (it's one of the static pods; they are created by the kubelet using the files in /etc/kubernetes/manifests):
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
# for minikube
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following line to the pod spec:
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.65.3
    ...
    - --service-node-port-range=443-22000 # <-- add this line
    ...
Save and exit. Pod kube-apiserver will be restarted with new parameters.
Exit the Docker VM (for screen: Ctrl-a, k; for the container: Ctrl-d).
Check the results:
$ kubectl get pod kube-apiserver-docker-desktop -o yaml -n kube-system | less
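A shorter check than paging through the whole manifest (using the same pod name) is to grep for the flag directly:
$ kubectl get pod kube-apiserver-docker-desktop -n kube-system -o yaml | grep service-node-port-range
# you should see the line you added, e.g. - --service-node-port-range=443-22000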
Create a simple deployment and expose it with a service:
$ kubectl run nginx1 --image=nginx --replicas=2
$ kubectl expose deployment nginx1 --port 80 --type=NodePort
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
nginx1 NodePort 10.99.173.234 <none> 80:14966/TCP 5s
As you can see NodePort was chosen from the new range.
There are other ways to expose your container: HostNetwork, HostPort, MetalLB
You need to add the correct security context for that purpose; check out how the ingress addon in minikube works, for example:
...
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
...
securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - ALL