I'm new to Kubernetes and I'm trying to scale my pods. First I started 3 pods:
./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
Three pods started. First I tried to scale up/down by using a replication controller, but that did not exist; it seems to be a ReplicaSet now.
./cluster/kubectl.sh get rs
NAME                  DESIRED   CURRENT   AGE
my-nginx-2494149703   3         3         9h
I tried to change the number of replicas described in my ReplicaSet:
./cluster/kubectl.sh scale --replicas=5 rs/my-nginx-2494149703
replicaset "my-nginx-2494149703" scaled
But I still see my 3 original pods:
./cluster/kubectl.sh get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-2494149703-04xrd   1/1     Running   0          9h
my-nginx-2494149703-h3krk   1/1     Running   0          9h
my-nginx-2494149703-hnayu   1/1     Running   0          9h
I would expect to see 5 pods.
./cluster/kubectl.sh describe rs/my-nginx-2494149703
Name:         my-nginx-2494149703
Namespace:    default
Image(s):     nginx
Selector:     pod-template-hash=2494149703,run=my-nginx
Labels:       pod-template-hash=2494149703
              run=my-nginx
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Why isn't it scaling up? Do I also have to change something in the deployment?
I see something like this when I describe my rs after scaling up (here I try to scale from one running pod to 3 running pods), but it remains at one running pod. The other 2 are started and killed immediately:
34s   34s   1   {replicaset-controller }   Normal   SuccessfulCreate   Created pod: my-nginx-1908062973-lylsz
34s   34s   1   {replicaset-controller }   Normal   SuccessfulCreate   Created pod: my-nginx-1908062973-5rv8u
34s   34s   1   {replicaset-controller }   Normal   SuccessfulDelete   Deleted pod: my-nginx-1908062973-lylsz
34s   34s   1   {replicaset-controller }   Normal   SuccessfulDelete   Deleted pod: my-nginx-1908062973-5rv8u
This works for me:
kubectl scale --replicas=<expected_replica_num> deployment <deployment_label_name> -n <namespace>
Example
# kubectl scale --replicas=3 deployment xyz -n my_namespace
TL;DR: You need to scale your deployment instead of the replica set directly.
If you try to scale the replica set, then it will (for a very short time) have a new count of 5. But the deployment controller will see that the current count of the replica set is 5 and since it knows that it is supposed to be 3, it will reset it back to 3. By manually modifying the replica set that was created for you, you are fighting with the system controller (which is untiring and will pretty much always outlast you).
kubectl run my-nginx --image=nginx --replicas=3 --port=80: this kubectl run creates a deployment or job to manage the created container(s).
Deployment --> ReplicaSet --> Pod: this is how Kubernetes works.
If you change the bottom-level object, its higher-level object will undo your change. You have to change the top-level object.
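Applied to the question above, that means scaling the Deployment that kubectl run created rather than the ReplicaSet it manages (the my-nginx name and the run=my-nginx label come from the question; the commands themselves are standard kubectl):
$ kubectl scale deployment my-nginx --replicas=5
$ kubectl get pods -l run=my-nginx
The Deployment controller then raises the ReplicaSet's desired count to 5 itself, so nothing fights your change.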
Scale it down to zero and then up to the number of pods you require (which I guess equals 3):
kubectl scale deployment <deployment-name> --replicas=0 -n <namespace>
kubectl scale deployment <deployment-name> --replicas=3 -n <namespace>
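Note that scaling to zero first deletes every running pod of the Deployment before the new ones come up, so expect a brief outage. You can watch the pods cycle with plain kubectl (same placeholder names as above):
kubectl get pods -n <namespace> -w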
Not sure if this is the best way, as I'm starting out with Kubernetes, but I did this by updating my yaml file:
# app.yaml
apiVersion: apps/v1
...
spec:
  replicas: <new value>
and running $ kubectl scale -f app.yaml --replicas=<new value>
You can verify your new number of replicas by running $ kubectl get pods
In my case I was also interested in scaling back my VMs on Google Cloud. I did this with $ gcloud container clusters resize appName --size=1 --zone "my-zone"
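(A side note, and an assumption on my part about gcloud versions: in more recent releases the --size flag was renamed, so the equivalent call would be the line below.)
$ gcloud container clusters resize appName --num-nodes=1 --zone "my-zone"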
The example below shows how you can scale your pods/resources/deployments up and down.
k8smaster@k8smaster:~/debashish$ more createdeb_deployment1.yaml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: debdeploy-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debdeploy1webserver
  template:
    metadata:
      labels:
        app: debdeploy1webserver
    spec:
      containers:
      - image: "docker.io/debu3645/debapachewebserver:v1"
        name: deb-deploy1-container
        ports:
        - containerPort: 6060
Deployment created with:
kubectl -n debns1 create -f createdeb_deployment1.yaml
k8smaster@k8smaster:~/debashish$ kubectl scale --replicas=5 deployment/debdeploy-webserver -n debns1
(scale up to 5 replicas)
k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1
NAME                                   READY   STATUS    RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-8wvzx   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-jrf6v   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-m9fpw   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running   0          16s
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running   1          19h
resourcepod-deb1                       1/1     Running   5          6d18h
k8smaster@k8smaster:~/debashish$ kubectl get ep -n debns1
NAME                ENDPOINTS                                                     AGE
frontend-svc-deb    192.168.1.10:80,192.168.1.11:80,192.168.1.12:80 + 2 more...   18h
frontend-svc1-deb   192.168.1.8:80                                                14d
frontend-svc2-deb   192.168.1.8:80                                                5d19h
k8smaster@k8smaster:~/debashish$ kubectl scale --replicas=2 deployment/debdeploy-webserver -n debns1
(scale down from 5 to 2)
deployment.extensions/debdeploy-webserver scaled
k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1
NAME                                   READY   STATUS        RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-8wvzx   1/1     Terminating   0          35m
debdeploy-webserver-7cf4fb74c5-jrf6v   1/1     Terminating   0          35m
debdeploy-webserver-7cf4fb74c5-m9fpw   1/1     Terminating   0          35m
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running       0          35m
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running       1          19h
resourcepod-deb1                       1/1     Running       5          6d19h
k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1
NAME                                   READY   STATUS    RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running   0          37m
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running   1          19h
resourcepod-deb1                       1/1     Running   5          6d19h
k8smaster@k8smaster:~/debashish$ kubectl scale --current-replicas=4 --replicas=2 deployment/debdeploy-webserver -n debns1
(check the current replica count first: if it is 4, bring it down to 2; otherwise do nothing)
error: Expected replicas to be 4, was 2
k8smaster@k8smaster:~/debashish$ kubectl scale --current-replicas=3 --replicas=10 deployment/debdeploy-webserver -n debns1
error: Expected replicas to be 3, was 2
k8smaster@k8smaster:~/debashish$ kubectl scale --current-replicas=2 --replicas=10 deployment/debdeploy-webserver -n debns1
deployment.extensions/debdeploy-webserver scaled
k8smaster@k8smaster:~/debashish$ kubectl get pods -n debns1
NAME                                   READY   STATUS              RESTARTS   AGE
debdeploy-webserver-7cf4fb74c5-46bxg   1/1     Running             0          6s
debdeploy-webserver-7cf4fb74c5-d6qsx   0/1     ContainerCreating   0          6s
debdeploy-webserver-7cf4fb74c5-fdq6v   1/1     Running             0          6s
debdeploy-webserver-7cf4fb74c5-gd87t   1/1     Running             0          6s
debdeploy-webserver-7cf4fb74c5-kqdbj   0/1     ContainerCreating   0          6s
debdeploy-webserver-7cf4fb74c5-q9n7r   1/1     Running             0          47m
debdeploy-webserver-7cf4fb74c5-qjvm6   1/1     Running             0          6s
debdeploy-webserver-7cf4fb74c5-skxq4   0/1     ContainerCreating   0          6s
debdeploy-webserver-7cf4fb74c5-ttw6p   1/1     Running             1          19h
debdeploy-webserver-7cf4fb74c5-wlc7q   0/1     ContainerCreating   0          6s
resourcepod-deb1                       1/1     Running             5          6d19h
For a deployment:
kubectl scale deployment <deployment-name> --replicas=3 -n <namespace>
For a statefulset:
kubectl scale statefulsets <stateful-set-name> --replicas=3 -n <namespace>
Related
I have a use-case for concurrent restart of all pods in a statefulset.
Does kubernetes statefulset support concurrent restart of all pods?
According to the StatefulSet documentation, this can be accomplished by setting the pod management policy to Parallel, as in this example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-db
spec:
  podManagementPolicy: Parallel
  replicas: 3
However this does not seem to work in practice, as demonstrated on this statefulset running on EKS:
Apply this:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: producer
  namespace: ragnarok
spec:
  selector:
    matchLabels:
      app: producer
  replicas: 10
  podManagementPolicy: "Parallel"
  serviceName: producer-service
  template:
    metadata:
      labels:
        app: producer
    spec:
      containers:
      - name: producer
        image: archbungle/load-tester:pulsar-0.0.49
        imagePullPolicy: IfNotPresent
Rollout restart happens in sequence, as if the pod management policy setting were disregarded:
(base) welcome@Traianos-MacBook-Pro eks-deploy % kubectl get pods -n ragnarok | egrep producer
producer-0   1/1   Running             0   3m58s
producer-1   1/1   Running             0   3m56s
producer-2   1/1   Running             0   3m53s
producer-3   1/1   Running             0   3m47s
producer-4   1/1   Running             0   3m45s
producer-5   1/1   Running             0   3m43s
producer-6   1/1   Running             1   3m34s
producer-7   0/1   ContainerCreating   0   1s
producer-8   1/1   Running             0   16s
producer-9   1/1   Running             0   19s
(base) welcome@Traianos-MacBook-Pro eks-deploy % kubectl get pods -n ragnarok | egrep producer
producer-0   1/1   Running       0   4m2s
producer-1   1/1   Running       0   4m
producer-2   1/1   Running       0   3m57s
producer-3   1/1   Running       0   3m51s
producer-4   1/1   Running       0   3m49s
producer-5   1/1   Running       0   3m47s
producer-6   0/1   Terminating   1   3m38s
producer-7   1/1   Running       0   5s
producer-8   1/1   Running       0   20s
producer-9   1/1   Running       0   23s
(base) welcome@Traianos-MacBook-Pro eks-deploy % kubectl get pods -n ragnarok | egrep producer
producer-0   1/1   Running       0   4m8s
producer-1   1/1   Running       0   4m6s
producer-2   1/1   Running       0   4m3s
producer-3   1/1   Running       0   3m57s
producer-4   1/1   Running       0   3m55s
producer-5   0/1   Terminating   0   3m53s
producer-6   1/1   Running       0   4s
producer-7   1/1   Running       0   11s
producer-8   1/1   Running       0   26s
producer-9   1/1   Running       0   29s
As the documentation points out, Parallel pod management is effective only for scaling operations: "This option only affects the behavior for scaling operations. Updates are not affected."
Maybe you can try something like
kubectl scale statefulset producer --replicas=0 -n ragnarok
and
kubectl scale statefulset producer --replicas=10 -n ragnarok
According to the documentation, all pods should be deleted and created together when you scale them with the Parallel policy.
Reference: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management
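If what you actually need is a concurrent restart rather than a scale down/up, one workaround (my suggestion, not something the docs prescribe) is to delete all the pods at once: with podManagementPolicy: Parallel the StatefulSet controller recreates them all in parallel, at the cost of a full outage while they restart. The label comes from the manifest above:
kubectl delete pods -l app=producer -n ragnarok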
I'm trying to understand how Kubernetes works, so I tried this operation on my minikube:
~ kubectl delete pod --all -n kube-system
pod "coredns-f9fd979d6-5n4b6" deleted
pod "etcd-minikube" deleted
pod "kube-apiserver-minikube" deleted
pod "kube-controller-manager-minikube" deleted
pod "kube-proxy-879lg" deleted
pod "kube-scheduler-minikube" deleted
It's okay; the pods were deleted as expected. But if I do kubectl get pods -n kube-system I will see:
NAME                               READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-5d25r            1/1     Running   0          50s
etcd-minikube                      1/1     Running   0          50s
kube-apiserver-minikube            1/1     Running   0          50s
kube-controller-manager-minikube   1/1     Running   0          50s
kube-proxy-nlw69                   1/1     Running   0          43s
kube-scheduler-minikube            1/1     Running   0          49s
Okay. I thought they were managed by a ReplicaSet or a DaemonSet:
➜ ~ kubectl get ds -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   18m
➜ ~ kubectl get rs -n kube-system
NAME                DESIRED   CURRENT   READY   AGE
coredns-f9fd979d6   1         1         1       18m
That is true for coredns and kube-proxy. But what about the others (apiserver, etcd, controller-manager and scheduler)? Why are they still alive?
The control plane pods run as static Pods. Static Pods are not managed by control plane controllers such as DaemonSets and ReplicaSets; they are managed directly by the kubelet daemon on the local node.
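On kubeadm-bootstrapped clusters (kubeadm is minikube's default bootstrapper), the kubelet watches a manifest directory, typically /etc/kubernetes/manifests, and runs every pod spec it finds there. You can see where the control plane pods come from (the file names below are the kubeadm defaults):
$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
Deleting the mirror pod through the API has no lasting effect: as long as the manifest file exists, the kubelet restarts the pod and re-registers it.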
I'm trying to delete some old deployments/replicasets I have in my cluster, but when I run kubectl delete deployment it says the deployment is deleted and the pod from that deployment is Terminating, and then a few seconds later the deployment is magically recreated and the pod comes back.
This is the same result for another replicaset I have.
What could be re-creating these deployments / replicasets and how can I stop it so I can permanently delete these deployments/rs?
Edit: Here's some output. This is on a Kubernetes cluster in GKE, btw:
kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
quickstart-kb   1/1     1            1           41m
ubuntu          1/1     1            1           66d
kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
ubuntu-677fc9fd77-fgd7k         1/1     Running   0          19d
quickstart-kb-f9b65577f-4fxph   1/1     Running   0          40m
kubectl delete deployment quickstart-kb
deployment.extensions "quickstart-kb" deleted
kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
quickstart-kb   0/1     1            0           7s
ubuntu          1/1     1            1           66d
kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-6cb6cf897d-qcjff   0/1     Running   0          11s
ubuntu-677fc9fd77-fgd7k          1/1     Running   0          19d
kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
quickstart-kb   1/1     1            1           4m6s
ubuntu          1/1     1            1           66d
kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-6cb6cf897d-qcjff   1/1     Running   0          4m13s
ubuntu-677fc9fd77-fgd7k          1/1     Running   0          19d
I think your Deployment object was created by a custom resource (CRD). When you created the custom resource, its controller created the Deployment object. So even if you delete the Deployment object, the controller re-creates it.
Delete the custom resource itself to delete the Deployment and any other objects that were created with it.
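One way to confirm this before deleting anything is to check the Deployment's ownerReferences, which name whatever controller owns it (a generic check that works on any cluster):
$ kubectl get deployment quickstart-kb -o jsonpath='{.metadata.ownerReferences}'
If that prints an owner with a custom kind, deleting the Deployment directly will never stick.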
From the name, it looks like a Kibana custom resource:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
Use the following command to delete the Kibana object:
$ kubectl delete Kibana quickstart-kb
I am building a Kubernetes cluster following this tutorial, and I am having trouble accessing the Kubernetes dashboard. I already created another question about it that you can see here, but while digging into my cluster, I think the problem might be somewhere else, and that's why I am creating a new question.
I start my master, by running the following commands:
> kubeadm reset
> kubeadm init --apiserver-advertise-address=[MASTER_IP] > file.txt
> tail -2 file.txt > join.sh # I keep this file for later
> kubectl apply -f https://git.io/weave-kube/
> kubectl -n kube-system get pod
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-kb2zq              0/1     Pending   0          2m46s
coredns-fb8b8dccf-nnc5n              0/1     Pending   0          2m46s
etcd-kubemaster                      1/1     Running   0          93s
kube-apiserver-kubemaster            1/1     Running   0          93s
kube-controller-manager-kubemaster   1/1     Running   0          113s
kube-proxy-lxhvs                     1/1     Running   0          2m46s
kube-scheduler-kubemaster            1/1     Running   0          93s
Here we can see that I have two coredns pods stuck in the Pending state forever, and when I run the command:
> kubectl -n kube-system describe pod coredns-fb8b8dccf-kb2zq
I can see the following Warning in the Events section:
FailedScheduling: 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Since it is a Warning and not an Error, and since, as a Kubernetes newbie, taints do not mean much to me, I tried to connect a node to the master (using the previously saved command):
> cat join.sh
kubeadm join [MASTER_IP]:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash sha256:[ANOTHER_TOKEN]
> ssh [USER]@[WORKER_IP] 'bash' < join.sh
This node has joined the cluster.
On the master, I check that the node is connected:
> kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
kubemaster   NotReady   master   13m   v1.14.1
kubeslave1   NotReady   <none>   31s   v1.14.1
And I check my pods:
> kubectl -n kube-system get pod
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-fb8b8dccf-kb2zq              0/1     Pending             0          14m
coredns-fb8b8dccf-nnc5n              0/1     Pending             0          14m
etcd-kubemaster                      1/1     Running             0          13m
kube-apiserver-kubemaster            1/1     Running             0          13m
kube-controller-manager-kubemaster   1/1     Running             0          13m
kube-proxy-lxhvs                     1/1     Running             0          14m
kube-proxy-xllx4                     0/1     ContainerCreating   0          2m16s
kube-scheduler-kubemaster            1/1     Running             0          13m
We can see that another kube-proxy pod has been created and is stuck in the ContainerCreating status.
And when I do a describe again:
kubectl -n kube-system describe pod kube-proxy-xllx4
I can see multiple identical Warnings in the Events section:
Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:43133->[::1]:53: read: connection refused
Here are my repositories:
docker image ls
REPOSITORY                           TAG
k8s.gcr.io/kube-proxy                v1.14.1
k8s.gcr.io/kube-apiserver            v1.14.1
k8s.gcr.io/kube-controller-manager   v1.14.1
k8s.gcr.io/kube-scheduler            v1.14.1
k8s.gcr.io/coredns                   1.3.1
k8s.gcr.io/etcd                      3.3.10
k8s.gcr.io/pause                     3.1
And so, for the dashboard part, I tried to start it with the command:
> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
But the dashboard pod is stuck in Pending state.
kubectl -n kube-system get pod
NAME                                    READY   STATUS              RESTARTS   AGE
coredns-fb8b8dccf-kb2zq                 0/1     Pending             0          40m
coredns-fb8b8dccf-nnc5n                 0/1     Pending             0          40m
etcd-kubemaster                         1/1     Running             0          38m
kube-apiserver-kubemaster               1/1     Running             0          38m
kube-controller-manager-kubemaster      1/1     Running             0          39m
kube-proxy-lxhvs                        1/1     Running             0          40m
kube-proxy-xllx4                        0/1     ContainerCreating   0          27m
kube-scheduler-kubemaster               1/1     Running             0          38m
kubernetes-dashboard-5f7b999d65-qn8qn   1/1     Pending             0          8s
So, even though my problem originally was that I cannot access my dashboard, I guess the real problem is deeper than that.
I know that I just put a lot of information here, but I am a k8s beginner and I am completely lost on this.
There is an issue I experienced with coredns pods stuck in Pending mode when setting up my own cluster, which I resolved by adding a pod network.
It looks like the nodes are tainted as not-ready because there is no network add-on installed. Installing the add-on removes the taints and the pods are able to schedule. In my case, adding flannel fixed the issue.
EDIT: There is a note about this in the official k8s documentation, Create cluster with kubeadm:
"The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet)."
Actually, it is the opposite of a deep or serious issue; this is a trivial one. Whenever you see a pod stuck in the Pending state, it means the scheduler is having a hard time scheduling the pod, mostly because there are not enough resources on the node.
In your case the node has a taint, and your pod doesn't have the matching toleration. What you have to do is describe the node and get the taint:
kubectl describe node | grep -i taints
Note: you might have more than one taint, so you might want to do kubectl describe no NODE, since with grep you will only see one taint.
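On a master freshly set up with kubeadm, as in the question, the output will typically be the default master taint:
Taints: node-role.kubernetes.io/master:NoSchedule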
Once you get the taint, it will be something like hello=world:NoSchedule, which means key=value:effect. You will have to add a tolerations section to your Deployment. This is an example Deployment so you can see how it should look:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 10
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http
      tolerations:
      - effect: NoExecute   # or NoSchedule, PreferNoSchedule
        key: node
        operator: Equal
        value: not-ready
        tolerationSeconds: 3600
As you can see, there is a tolerations section in the yaml. So if I had a node with the node=not-ready:NoExecute taint, no pod would be able to be scheduled on that node unless it had this toleration.
You can also remove the taint if you don't need it. To remove a taint, describe the node, get the key of the taint, and do:
kubectl taint node NODE key-
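For example, assuming the taint on the question's master is the default kubeadm one shown above, allowing workloads to schedule there would look like this (kubemaster is the node name from the question):
kubectl taint node kubemaster node-role.kubernetes.io/master-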
Hope it makes sense. Just add this section to your deployment, and it will work.
Set up the flannel network tool.
Run these commands:
$ sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
I have a Kubernetes cluster with 5 nodes. When I add a simple nginx pod, it will be scheduled to one of the nodes, but it will not start up. It will not even pull the image.
This is the nginx.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
When I describe the pod, there is one event: Successfully assigned busybox to up02. When I log in to up02 and check whether any images were pulled, I see it didn't get pulled, so I pulled it manually (I thought maybe it needs some kick start ;) ).
The pod will always stay in the ContainerCreating state. It's not only this pod; the problem occurs with any pod I try to add.
There are some pods running on the machine which are necessary for Kubernetes to operate:
up@up01:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS              RESTARTS   AGE
default       busybox                                 0/1     ContainerCreating   0          11m
default       nginx                                   0/1     ContainerCreating   0          22m
kube-system   dummy-2088944543-n1cd5                  1/1     Running             0          5d
kube-system   etcd-up01                               1/1     Running             0          5d
kube-system   kube-apiserver-up01                     1/1     Running             0          5d
kube-system   kube-controller-manager-up01            1/1     Running             0          5d
kube-system   kube-discovery-1769846148-xfpls         1/1     Running             0          5d
kube-system   kube-dns-2924299975-5rzz8               4/4     Running             0          5d
kube-system   kube-proxy-17bpl                        1/1     Running             2          3d
kube-system   kube-proxy-3pk63                        1/1     Running             0          3d
kube-system   kube-proxy-h3wrj                        1/1     Running             0          5d
kube-system   kube-proxy-wzqv4                        1/1     Running             0          3d
kube-system   kube-proxy-z3xxx                        1/1     Running             0          3d
kube-system   kube-scheduler-up01                     1/1     Running             0          5d
kube-system   kubernetes-dashboard-3203831700-3xfbd   1/1     Running             0          5d
kube-system   weave-net-6c0nr                         2/2     Running             0          3d
kube-system   weave-net-dchhf                         2/2     Running             0          5d
kube-system   weave-net-hshvg                         2/2     Running             4          3d
kube-system   weave-net-n684c                         2/2     Running             1          3d
kube-system   weave-net-r5319                         2/2     Running             0          3d
You can do
kubectl describe pods <pod>
to get more info on what's happening.
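Beyond describe, pulling the recent events for the namespace often shows the failure directly (for instance an image pull or network plugin error); this is standard kubectl with nothing cluster-specific assumed:
kubectl get events --sort-by=.metadata.creationTimestamp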
Can you recreate the nginx pod again in the kube-system namespace?
kubectl create --namespace kube-system -f nginx.yaml
This should fix your problem.
Second, if you have a proxy in your environment, take a look at that as well.
Make sure that your namespace and service account information is correct. If you've configured your services or deployments to use a namespace or service account, that namespace needs to exist.
If you configured it to use a non-default service account, then that has to exist as well, and the service account should be created after the namespace.
You shouldn't necessarily be using the kube-system namespace. Namespaces exist so there can be more than one of them, to control the flow of traffic inside a cluster.
You should also potentially set permissions for your namespace. Read about that here:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
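A minimal sketch of what that ordering looks like, with hypothetical names (my-app, my-app-sa); the Namespace comes first in the file so it is created before the ServiceAccount that lives in it:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: my-app
Applying this single file with kubectl apply -f creates both in order.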