K8S audit changes are not being saved on the master after restart

I created a K8S cluster (unmanaged) in Google Cloud.
I added the following changes on the master:
--audit-dynamic-configuration --feature-gates=DynamicAuditing=true --runtime-config=auditregistration.k8s.io/v1alpha1=true
as described in:
https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
and everything works as expected.
But after a restart these settings are not preserved.
Has anyone encountered this problem?

Assuming you are using kubeadm, this is how you apply flags to the apiserver (all of these changes should be done on the master node).
Edit the following file: /etc/kubernetes/manifests/kube-apiserver.yaml and add these flags to the list of flags:
--audit-dynamic-configuration
--feature-gates=DynamicAuditing=true
--runtime-config=auditregistration.k8s.io/v1alpha1=true
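A minimal sketch of how the relevant part of the manifest might look after the edit (surrounding fields are omitted, and the image tag and the existing flags shown are placeholders; keep whatever flags your manifest already has):
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:<your-version>
    command:
    - kube-apiserver
    - --advertise-address=10.0.0.1    # existing flags stay as they are
    - --audit-dynamic-configuration
    - --feature-gates=DynamicAuditing=true
    - --runtime-config=auditregistration.k8s.io/v1alpha1=true
Because the kubelet watches the static pod manifests in /etc/kubernetes/manifests, flags set here survive a node restart, which is what the question was missing.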
Note that every change done to the kube-apiserver manifest causes the apiserver to restart.
Once it is back up, run ps -ef | grep kube-apiserver to verify that all the flags are set. The output should contain the flags you applied.
In case of issues, check the kube-apiserver logs under /var/log/containers/ and look for files whose names begin with kube-apiserver.
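For example, a quick way to follow those logs (the exact file names include the pod name and container ID, so the wildcard here is an assumption about your naming):
sudo tail -f /var/log/containers/kube-apiserver-*.log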

Related

Issue in setting up KUBECTL on Windows 10 Home

I am trying to learn Kubernetes, so I installed Minikube on my local Windows 10 Home machine and then tried installing kubectl. However, so far I have been unsuccessful in getting it right.
So this what I have done so far:
Downloaded the kubectl.exe file from https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/windows/amd64/kubectl.exe
Then I added the path of this exe to the Path environment variable.
However, this didn't work when I executed kubectl version in the command prompt or even in PowerShell (in admin mode).
Next I tried using the curl command as given in the docs - https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-with-curl-on-windows
However, that too didn't work.
Upon searching for answers to fix the issue, I stumbled upon a Stack Overflow question which explained how to create a .kube config folder, because it didn't exist on my local machine. I followed the instructions, but that too failed.
So right now I am completely out of ideas and not sure what's the issue here. FYI, I was able to install everything in a breeze on my Mac; Windows, however, is just acting up.
Any help would be really helpful.
As user @paltaa asked in a comment:
did you do a minikube start?
The fact that you did not start minikube is the most probable cause of this error.
Additionally, this error message shows up when minikube is stopped, as stopping changes the current-context inside the config file.
There is no need to create a config file inside a .kube directory, as minikube start will create the appropriate files and directories for you automatically.
If you run the minikube start command successfully, you should get the message below at the end of the configuration process, indicating that kubectl has been set up for minikube automatically:
Done! kubectl is now configured to use "minikube"
Additionally, if you invoke $ kubectl config you will get more information about how kubectl looks for configuration files:
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
Please take a special look at this part:
Otherwise, ${HOME}/.kube/config is used
Even if you do not set the KUBECONFIG environment variable, kubectl will default to your user directory (for example C:\Users\yoda\.kube\config).
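If you ever need to point kubectl at a specific file explicitly, you can set the variable yourself; a sketch for the Windows command prompt, assuming the default minikube location from above:
set KUBECONFIG=C:\Users\yoda\.kube\config
kubectl config current-context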
If for some reason your cluster is running and files got deleted/corrupted you can:
minikube stop
minikube start
which will recreate a .kube/config
Steps for running minikube on Windows in this case could be:
Download and install minikube using an installer executable (see Kubernetes.io: Install minikube)
Download, install and configure a Hypervisor (for example Virtualbox)
Download kubectl
OPTIONAL: Add the kubectl directory to Windows environment variables
From the command line or PowerShell, as the current user, run: $ minikube start --vm-driver=virtualbox
Wait for the configuration to finish, then invoke a command like $ kubectl get nodes; a consolidated sketch of these commands follows.
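Put together, the command-line part of those steps might look like this (the installer and VirtualBox are set up through their own GUIs first, and the --vm-driver flag assumes VirtualBox):
minikube start --vm-driver=virtualbox
kubectl get nodes
kubectl get pods --all-namespaces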

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS to be in a CrashLoopBackoff which turned out to be the case because Ubuntu was configured to use systemd-resolved and so the resolv.conf had a loopback resolver configured. After reading the docs for coredns, I found out that a solution for that would be to change the resolvConf parameter for kubelet - either via commandline arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation I didn't really get a clue, because it seems to be tailored to the case of initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize it again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several infos:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/kubelet/config.yaml that I could change, but AFAIU this would be overridden by the kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success with some combination of changing the aforementioned CM, running kubeadm upgrade something afterwards, and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config - how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in the official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
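A hedged sketch of those three steps, following the dynamic kubelet configuration workflow (the ConfigMap name my-node-config and the file name kubelet-config.yaml are placeholders, and --append-hash adds a suffix you have to copy into the patch):
# steps 1+2: wrap the kubelet config file in a ConfigMap in kube-system
kubectl -n kube-system create configmap my-node-config \
  --from-file=kubelet=kubelet-config.yaml --append-hash -o yaml
# step 3: point the Node at the ConfigMap that was just created
kubectl patch node ${NODE_NAME} -p '{"spec":{"configSource":{"configMap":{"name":"my-node-config-<hash>","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'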
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. In particular, remember that the Kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.

Changing the CPU Manager Policy in Kubernetes

I'm trying to change the CPU Manager Policy for a Kubernetes cluster that I manage, as described here; however, I've run into numerous issues while doing so.
The cluster is running in DigitalOcean and here is what I've tried so far.
1. Since the article mentions that --cpu-manager-policy is a kubelet option, I assume that I cannot change it via the API Server and have to change it manually on each node. (Is this assumption correct, BTW?)
2. I ssh into one of the nodes (droplets in DigitalOcean lingo) and run kubelet --cpu-manager-policy=static command as described in the kubelet CLI reference here. It gives me the message Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
3. So I check the file pointed at by the --config flag by running ps aux | grep kubelet and find that it's /etc/kubernetes/kubelet.conf.
4. I edit the file and add a line cpuManagerPolicy: static to it, and also kubeReserved and systemReserved because they become required fields if specifying cpuManagerPolicy.
5. Then I kill the kubelet process and restart it. A couple of other things came up (delete this file, drain the node, etc.) that I was able to get through, and I was ultimately able to restart the kubelet.
I'm a little lost about the following things:
Do I need to do this for all nodes? My cluster has 12 of them, and doing all of these steps for each seems very inefficient.
Is there any way I can set these params globally, i.e. cluster-wide, rather than going node by node?
How can I even confirm that what I did actually changed the CPU Manager policy?
One issue with Dynamic Configuration is that, in case the node fails to restart, the API does not give a reasonable response back telling you what you did wrong; you'll have to SSH into the node and tail the kubelet logs. Plus, you have to SSH into every node and set the --dynamic-config-dir flag anyway.
The following worked best for me.
SSH into the node and edit
vim /etc/systemd/system/kubelet.service
Add the following lines
--cpu-manager-policy=static \
--kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
--system-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
We need to set the --kube-reserved and --system-reserved flags because they're prerequisites to setting the --cpu-manager-policy flag.
Then drain the node and delete the following file
rm -rf /var/lib/kubelet/cpu_manager_state
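For reference, the drain and (after the restart below) uncordon steps might look like this, assuming the node name is in $NODE_NAME:
kubectl drain $NODE_NAME --ignore-daemonsets
# ...restart the kubelet as shown below, then:
kubectl uncordon $NODE_NAME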
Restart the kubelet
sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl start kubelet
Uncordon the node and check the policy. The check below assumes that you're running kubectl proxy on port 8001.
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | grep cpuManager
If you use a newer k8s version where the kubelet is configured by a kubelet configuration file, e.g. config.yml, you can follow the same steps @satnam described above. But instead of adding --kube-reserved, --system-reserved and --cpu-manager-policy, you add kubeReserved, systemReserved and cpuManagerPolicy in your config.yml. For example:
systemReserved:
  cpu: "1"
  memory: "100Mi"
kubeReserved:
  cpu: "1"
  memory: "100Mi"
cpuManagerPolicy: "static"
Meanwhile, be sure the CPUManager feature gate is enabled.
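After editing the file, the kubelet has to pick the change up; a minimal sketch, assuming the same systemd setup as above and that a stale CPU manager checkpoint may be present:
sudo rm -f /var/lib/kubelet/cpu_manager_state
sudo systemctl restart kubelet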
It might not be the global way of doing stuff, but I think it will be much more comfortable than what you are currently doing.
First you need to run
kubectl proxy --port=8001 &
Download the configuration:
NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME}
Edit it accordingly and push the configuration to the control plane as a ConfigMap. You will see a valid response if everything went well. Then you have to edit the Node object so the node starts to use the new ConfigMap. There are many more possibilities; for example, you can go back to the default settings if anything goes wrong.
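A sketch of the push, reusing the file downloaded above (the ConfigMap name my-node-config is a placeholder; --append-hash generates the final name, which the Node's spec.configSource must then reference, as in the patch sketched in the earlier answer):
kubectl -n kube-system create configmap my-node-config \
  --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml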
This process is described with all the details in this documentation section.
Hope this helps.

How does Kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
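For illustration, a minimal kubeconfig might look like this (the server address, certificate paths, and names are all placeholders; your cluster scripts generate the real values):
apiVersion: v1
kind: Config
clusters:
- name: vagrant-cluster
  cluster:
    server: https://10.245.1.2:443
    certificate-authority: /path/to/ca.crt
users:
- name: admin
  user:
    client-certificate: /path/to/admin.crt
    client-key: /path/to/admin.key
contexts:
- name: vagrant
  context:
    cluster: vagrant-cluster
    user: admin
current-context: vagrant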
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set, see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.

kubernetes pods spawn across all servers but kubectl only shows 1 running and 1 pending

I have a new setup of Kubernetes and I created a replication controller with 2 replicas. However, what I see when I do kubectl get pods is that one is running and another is "pending". Yet when I go to my 7 test nodes and do docker ps, I see that all of them are running.
What I think is happening is that I had to change the default insecure port from 8080 to 7080 (the docker app actually runs on 8080), however I don't know how to tell if I am right, or where else to look.
In the same vein, is there any way to set up a config for kubectl where I can specify the port? Doing kubectl --server="" each time is a bit annoying (yes, I know I can alias this).
If you changed the API port, did you also update the nodes to point them at the new port?
For the kubectl --server=... question, you can use kubectl config set-cluster to set cluster info in your ~/.kube/config file to avoid having to use --server all the time. See the following docs for details:
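A sketch of what that might look like, assuming the apiserver listens on your renamed insecure port 7080 and MASTER_IP is a placeholder for your master's address:
kubectl config set-cluster local --server=http://MASTER_IP:7080
kubectl config set-context local --cluster=local
kubectl config use-context local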
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-cluster.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-context.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_use-context.html