kube-controller-manager & kube-apiserver questions for a kubeadm-created cluster

I have created a k8s cluster using kubeadm and have a couple of questions about the kube-controller-manager and kube-apiserver components.
When created using kubeadm, those components are started as pods, not systemd daemons. If I kill any of those pods, they are restarted, but who is restarting them? I haven't seen any ReplicationController or Deployment in charge of doing that.
What is the "right" way of updating their configuration? Imagine I want to change the authorization-mode of the API server. On the master node we can find an /etc/kubernetes/manifests folder with a kube-apiserver.yaml file. Are we supposed to change this file and just kill the pod so that it restarts with the new config?

The feature you've described is called Static Pods. Here is the part of the documentation that describes their behaviour.
Static pods are managed directly by kubelet daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it.
Kubelet automatically tries to create a mirror pod on the Kubernetes API server for each static pod. This means that the pods are visible on the API server but cannot be controlled from there.
The configuration files are just standard pod definitions in json or yaml format in a specific directory. Use kubelet --pod-manifest-path=<the directory> to start kubelet daemon, which periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there. Note that kubelet will ignore files starting with dots when scanning the specified directory.
When kubelet starts, it automatically starts all pods defined in the directory specified in the --pod-manifest-path= or --manifest-url= arguments, i.e. our static-web.
Usually, those manifests are stored in the directory /etc/kubernetes/manifests.
If you make any changes to any of those manifests, the kubelet will pick them up and adjust the resource, just as if you had run a kubectl apply -f something.yaml command.
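For example, to change the authorization mode of the API server you would edit that flag in the static pod manifest and save the file; the kubelet notices the change and recreates the pod on its own. A rough sketch of the relevant part of the manifest, assuming a standard kubeadm layout (your file will contain many more flags):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # edit this value and save; no need to kill the pod manually
    # ...remaining flags unchanged...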

Related

Checking for particular pod status before each initialisation of another pod

Assume a deployment like this:
The deployment contains two types of pods: Config and App
Each App pod needs access to the Config pod in order to start
There is always only one Config pod
Already-launched App pods can work without access to the Config pod service
Situation I would like to manage:
A Node containing some of the App pods and the Config pod goes down for any reason
On another Node, the Config pod starts first
After the Config pod has started successfully, the App pods are launched
Already read about:
InitContainers - I couldn't find information on whether, if the Config pod were an init container, it would rerun in the situation above - I think not
StatefulSet - I cannot find a way this could help me in that situation
From my perspective I was thinking about a loop in the App pods, before running the target application, that would wait for the Config pod to come up and, in case of unavailability after a timeout, force them to fail. But I'm not sure if that is best practice; I would rather handle this with Kubernetes configuration than with such a script.
You would use either code in your app or an initContainer to block until a config pod is available. Combine this with a readinessProbe that checks whether the app is up. Doing the block-and-retry loop in your own code is a bit more work but recommended, since you can more carefully control the behavior. This means that app pods can launch whenever, but they won't be marked as ready for traffic until they initialize.
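A minimal sketch of the initContainer variant, assuming the Config pod is exposed through a Service (here called config-svc on port 8080; names, image and endpoint are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
  - name: wait-for-config
    image: busybox
    # loop until the config service answers; the App container only starts after this exits
    command: ['sh', '-c', 'until wget -q -O- http://config-svc:8080/healthz; do echo waiting for config-svc; sleep 2; done']
  containers:
  - name: app
    image: my-app:latest              # placeholder image
    readinessProbe:                   # pod is only marked ready once the app itself responds
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10

If you do the waiting in your own code instead, drop the initContainer and keep the readinessProbe; the pod will simply stay not-ready until your retry loop succeeds.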

Difficulty with different kubernetes pods, run using kubectl apply from the same container image, sharing directories

I am attempting to run two separate pods using the same container image on a cluster by applying a config file. Despite there being no shared or persistent volume, when both pods are active the same directory in both pods is updated with files created by the other pod, and write access changes suddenly. The container being used is the jupyter-docker-stacks jupyter/minimal-notebook image, pulled directly from Docker Hub. The pods running this container are created by applying a manifest. The two pods have different labels and names. A service with a unique name is created for each pod for access.
Do resources for containers persist over time on a cluster as they do with docker containers? I cannot find something equivalent to a --rm flag to be used alongside kubectl apply.
Thanks
If you want the pod to be deleted after the work is completed, you might want to use a Job instead of a Pod. The idea of a Job in k8s is to launch a pod, do the work, and then have the pod stop. For more info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
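A minimal Job sketch using the image from the question (the command is a placeholder for whatever one-off work you need); deleting the Job also removes the pod it created:

apiVersion: batch/v1
kind: Job
metadata:
  name: notebook-job
spec:
  backoffLimit: 2                          # retry the pod at most twice on failure
  template:
    spec:
      restartPolicy: Never                 # Jobs require Never or OnFailure
      containers:
      - name: worker
        image: jupyter/minimal-notebook
        command: ["python", "-c", "print('work done')"]   # placeholder one-off command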
$ kubectl apply -f <fileName> will create the pod or apply changes to it. If you want to delete a pod created with apply, you must use $ kubectl delete -f <fileName>
About sharing: if you have 2 separate manifests you can specify volumeMounts for each container. For more information please read the documentation, depending on your needs.
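For instance, giving each pod its own emptyDir keeps their working directories separate, since an emptyDir lives and dies with the pod that declares it (a sketch; the mount path is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: notebook-a
spec:
  containers:
  - name: notebook
    image: jupyter/minimal-notebook
    volumeMounts:
    - name: work
      mountPath: /home/jovyan/work      # example path; adjust to your image
  volumes:
  - name: work
    emptyDir: {}                        # private to this pod, removed when the pod is deleted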
Also, as #Kaizhe Huang advised, you can use a Job if you want to execute something one time, or try initContainers if you want to install something in the Pod before the main container runs. More about initContainers here.
You could check the Dockerfile of your image and see whether any 'VOLUME' is declared. If so, maybe they share the same volume on the host. Not sure, but you could check.

How to add flag to Kubernetes controller manager

I'm new to K8s. In the process of configuring OpenStack Cinder as a K8s StorageClass, I have to add some flags to my kube-controller-manager, and I found that to be a big problem.
I'm using K8s 1.11 in VMs, and my K8s cluster has a kube-controller-manager pod, but I don't know how to add these flags to my kube-controller-manager.
After hours of searching, I found that a lot of tasks require adding flags to kube-controller-manager, but no document that guides me exactly on how to do that. Please share the way to get past this.
Thank you.
You can check /etc/kubernetes/manifests dir on your master nodes.
This dir would contain yaml files for master components.
These are also known as static pods.
More Info : https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Update these files and you would be able to see your changes as kubelet should restart the pod on file change.
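For the OpenStack Cinder case this could look roughly as follows; the flag values are only an illustration, and the cloud config file would also have to be mounted into the pod:

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --cloud-provider=openstack                 # newly added flags
    - --cloud-config=/etc/kubernetes/cloud.conf
    # ...existing flags unchanged...

Once the file is saved, the kubelet restarts the kube-controller-manager pod with the new flags.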
As a more long-term solution, you will need to incorporate the flags into the tooling that you use to generate your k8s cluster.

kubernetes pods are restarting with new ID

The pods I am working with are being managed by kubernetes. When I use the docker restart command to restart a pod, sometimes the pod gets a new id and sometimes the old one. When the pod gets a new id, its state first goes from running -> error -> crashloopbackoff. Can anyone please tell me why this is happening? Also, how frequently does kubernetes do the health check?
Kubernetes currently does not use the docker restart command for many reasons (e.g., preserving the logs of older containers). Kubelet, the daemon on the node, creates a new container if the existing container terminated. In any case, users should not perform container lifecycle operations (e.g., stop, restart) on kubernetes-managed containers directly using docker, as it could cause unexpected behaviors.
EDIT: If you want kubernetes to restart your container automatically, set RestartPolicy in your pod spec to "Always" or "OnFailure". For more details, see http://kubernetes.io/docs/user-guide/pod-states/
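A minimal sketch of where that field sits in the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  restartPolicy: Always        # or OnFailure; the kubelet then recreates the container when it exits
  containers:
  - name: app
    image: my-app:latest       # placeholder image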

Reload Kubernetes ReplicationController to get newly created Service

Is there a way to reload currently running pods created by a ReplicationController so that they pick up newly created services?
Example:
I have running pods created from a ReplicationController config file. I have deleted a service called mongo-svc and recreated it using a different port. Is there a way for the pods' environment to be updated with the new IP and ports from the new mongo-svc?
You can restart pods by simply deleting them: if they are linked to a ReplicationController, the RC will take care of restarting them.
kubectl delete pod <your-pod-name>
If you have a couple of pods, it's easy enough to copy/paste the pod names, but if you have many pods it can become cumbersome.
So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need.
kubectl scale --replicas=0 rc <your-rc>
kubectl scale --replicas=<n> rc <your-rc>
By the way, you may also want to look at rolling updates to do this in a more production-friendly manner, but that implies updating the RC config.
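With the legacy kubectl rolling-update command that could look roughly like this (RC name and image are placeholders):

kubectl rolling-update <your-rc> --image=<your-image>:<new-tag>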
If you want the same pod to have the new service, the clean answer is no. You could (I strongly suggest not to do this) run kubectl exec <pod-name> -c <containers> -- export <service env var name>=<service env var value>. But your best bet is to run kubectl delete <pod-name> and let your replication controller handle the work.
I've run into a similar issue for services running outside of kubernetes, say a DB for instance. To address this I've been creating this https://github.com/cpg1111/kubongo which updates the service's endpoint without deleting the pods. That same idea can also be applied to other pods in kubernetes to automate the service update. Basically it watches a specific service, and when its IP changes for whatever reason it updates all the pods without deleting them. This does use the same code as kubectl exec, however it is automated, sanitizes input and ensures the export is executed on all pods.
What do you mean with 'reapply'?
The pods to which the services point are generally selected based on labels. In other words, you can add / remove labels from the pods to include / exclude them from a service.
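A minimal example of that wiring, with placeholder names and labels:

apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  selector:
    app: mongo            # only pods carrying this label become endpoints of the service
  ports:
  - port: 27017
    targetPort: 27017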
Read here for more information about defining services: http://kubernetes.io/v1.1/docs/user-guide/services.html#defining-a-service
And here for more information about labels: http://kubernetes.io/v1.1/docs/user-guide/labels.html
Hope it helps!