How to set custom dir to generate certs in minikube - kubernetes

Using kubeadm, we can pass --cert-dir to choose a custom directory for saving and storing the certificates.
--cert-dir The path where to save and store the certificates. (default "/etc/kubernetes/pki")
How can we set the custom dir in minikube?

Because kubeadm is the default bootstrapper in the minikube implementation, it should be possible to pass special kubeadm command-line parameters to minikube via the --extra-config flag.
The target configuration, changing the certificate directory via the --cert-dir flag, might look like:
$ sudo minikube start --vm-driver=none --extra-config=kubeadm.cert-dir="/$CERTS_PATH"
However, when I launched the above command, I received the following error:
😄 minikube v1.2.0 on linux (amd64)
💡 Sorry, the kubeadm.cert-dir parameter is currently not supported
by --extra-config
From minikube help guide:
Valid kubeadm parameters: ignore-preflight-errors, dry-run,
kubeconfig, kubeconfig-dir, node-name, cri-socket,
experimental-upload-certs, certificate-key, rootfs, pod-network-cidr
This actually breaks my plan for a ready-made solution, as I haven't found any other method to achieve it.
Will go further and share my progress though...
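For reference, the same --extra-config pattern does work for the kubeadm parameters listed above; a minimal sketch using one of them (the CIDR value is only illustrative):
# pod-network-cidr is one of the kubeadm parameters minikube accepts via --extra-config
sudo minikube start --vm-driver=none --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16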

Related

Error when installing Spinnaker on Kubernetes on prem cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup.
Following instructions from https://www.spinnaker.io/setup/
Install and run Halyard as Docker on the Kubernetes master.
Run everything as root
mkdir ~/.hal on Kubemaster. Created the service account as instructed on the site.
Copied the kubeconfig file from ./kube/config into ~/.hal/kubeconfig, as it didn't work with the docker -v option; there was some permission issue, so I made it work this way.
Ran the docker run halyard command -- all up and running fine.
Ran Bash and got inside the halyard container.
Now when I do these two things inside halyard
Point kubectl to the kubeconfig by export KUBECONFIG command
Enable kubernetes provider "hal config provider kubernetes enable"
The command sometimes executes successfully and sometimes fails with this warning after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I run these,
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff. It's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, these kinds of errors can occur when there is no network connectivity from inside the container.
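As a quick sanity check, one can verify outbound connectivity from inside the Halyard container along these lines (the container name halyard is an assumption, as is curl being available in the image; substitute whatever name you gave to docker run):
# Try to reach the config bucket endpoint from inside the container
docker exec -it halyard curl -sI https://www.googleapis.com
# A timeout or DNS failure here points to the missing-network problem described above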
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So, it seems like inside of Azure VM without a Public IP, the CNI plugin is needed for a VM To connect to internet.
To configure the CNI plugin on the Azure platform, use this guide.
Hope it helps.

Issue in setting up KUBECTL on Windows 10 Home

I am trying to learn Kubernetes, so I installed Minikube on my local Windows 10 Home machine and then tried installing kubectl. However, so far I have been unsuccessful in getting it right.
So this what I have done so far:
Downloaded the kubectl.exe file from https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/windows/amd64/kubectl.exe
Then I added the path of this exe to the PATH environment variable.
However this didn't work when I executed kubectl version in the command prompt or even in PowerShell (in admin mode).
Next I tried using the curl command as given in the docs - https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-with-curl-on-windows
However that too didn't work.
Upon searching for answers to fix the issue, I stumbled upon this Stack Overflow question, which explained how to create a .kube config folder because it didn't exist on my local machine. I followed the instructions, but that too failed.
So right now I am completely out of ideas and not sure what the issue is here. FYI, I was able to install everything in a breeze on my Mac, however Windows is just acting crazy.
Any help would be really helpful.
As user #paltaa mentioned:
did you do a minikube start ? – paltaa 2 days ago
The fact that you did not start minikube is the most probable cause of this error.
Additionally, this error message also shows when minikube is stopped, as stopping changes the current-context inside the config file.
There is no need to create a config file inside a .kube directory, as minikube start will create the appropriate files and directories for you automatically.
If you run the minikube start command successfully, you should get the message below at the end of the configuration process, which indicates that kubectl has been configured for minikube automatically:
Done! kubectl is now configured to use "minikube"
Additionally, if you invoke the command $ kubectl config you will get more information on how kubectl looks for configuration files:
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes
place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for
your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When
a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the
last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
Please take a special look at the part:
Otherwise, ${HOME}/.kube/config is used
Even if you do not set the KUBECONFIG environment variable, kubectl will default to $USER_DIRECTORY (for example C:\Users\yoda\...).
If for some reason your cluster is running and the files got deleted/corrupted, you can run:
minikube stop
minikube start
which will recreate a .kube/config
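To check what kubectl ended up with after the restart, the following commands print the merged configuration and the active context:
# Show the merged kubeconfig kubectl is actually using
kubectl config view
# Should print "minikube" once minikube start has completed
kubectl config current-context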
Steps for running minikube on Windows in this case could be:
Download and install Kubernetes.io: Install minikube using an installer executable
Download, install and configure a Hypervisor (for example Virtualbox)
Download kubectl
OPTIONAL: Add the kubectl directory to Windows environment variables
Run from the command line or PowerShell as the current user: $ minikube start --vm-driver=virtualbox
Wait for the configuration to finish and invoke a command like $ kubectl get nodes.

Configure apiserver to use encryption config using minikube

I am trying to configure the kube-apiserver so that it uses encryption for secrets in my minikube cluster.
For that, I have followed the documentation on kubernetes.io but got stuck at step 3 that says
Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
I have discovered the option --extra-config on minikube start and have tried starting my setup using
minikube start --extra-config=apiserver.encryption-provider-config=encryptionConf.yaml
but naturally it doesn't work as encryptionConf.yaml is located in my local file system and not in the pod that's spun up by minikube. The error minikube log gives me is
error: error opening encryption provider configuration file "encryptionConf.yaml": open encryptionConf.yaml: no such file or directory
What is the best practice to get the encryption configuration file onto the kube-apiserver? Or is minikube perhaps the wrong tool to try out these kinds of things?
I found the solution myself in this GitHub issue, where there is a similar problem with passing a configuration file. The comment that helped me described a slightly hacky solution that makes use of the fact that the directory /var/lib/localkube/certs/ from the minikube VM is mounted into the apiserver.
So my final solution was to run
minikube mount .:/var/lib/minikube/certs/hack
where the current directory contained my encryptionConf.yaml, and then start minikube like so:
minikube start --extra-config=apiserver.encryption-provider-config=/var/lib/minikube/certs/hack/encryptionConf.yaml
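To confirm that the flag actually reached the apiserver, one rough check is to grep the running static pod definition (the component=kube-apiserver label follows the usual kubeadm convention and is an assumption here):
# Look for the encryption flag in the kube-apiserver pod spec
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep encryption-provider-config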
Depending on the driver used, some host directories are mounted into your minikube VM.
Check this link - https://kubernetes.io/docs/setup/minikube/#mounted-host-folders
In addition, ~/.minikube/files is mounted into the VM at /files, so you can keep your files there and use that path for the API server config.
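A hedged sketch of that approach, assuming the /files path mentioned in the linked page applies to your driver and minikube version:
# Place the config where minikube exposes it inside the VM
mkdir -p ~/.minikube/files
cp encryptionConf.yaml ~/.minikube/files/
# Reference the in-VM path when starting minikube
minikube start --extra-config=apiserver.encryption-provider-config=/files/encryptionConf.yaml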
I had similar issues on Windows regarding the file path location.
Since C:\Users\%USERNAME%\ is mounted into the minikube VM by default,
I copied the files to the Desktop folder (any folder under the C drive) and ran:
minikube start --extra-config=apiserver.encryption-provider-config=/c/Users/%USERNAME%/.../<file-name>
Hope this is helpful for folks facing this issue on the Windows platform.

Changing the CPU Manager Policy in Kubernetes

I'm trying to change the CPU Manager Policy for a Kubernetes cluster that I manage, as described here; however, I've run into numerous issues while doing so.
The cluster is running in DigitalOcean and here is what I've tried so far.
1. Since the article mentions that --cpu-manager-policy is a kubelet option, I assume that I cannot change it via the API server and have to change it manually on each node. (Is this assumption correct, BTW?)
2. I ssh into one of the nodes (droplets in DigitalOcean lingo) and run kubelet --cpu-manager-policy=static command as described in the kubelet CLI reference here. It gives me the message Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
3. So I check the file pointed at by the --config flag by running ps aux | grep kubelet and find that it's /etc/kubernetes/kubelet.conf.
4. I edit the file and add a line cpuManagerPolicy: static to it, and also kubeReserved and systemReserved because they become required fields if specifying cpuManagerPolicy.
5. Then I kill the running kubelet process and restart it. A couple of other things showed up (delete this file, drain the node, etc.) that I was able to work through, and I was ultimately able to restart the kubelet.
I'm a little lost about the following things:
Do I need to do this for all nodes? My cluster has 12 of them and doing all of these steps for each seems very inefficient.
Is there any way I can set these params globally, i.e. cluster-wide, rather than doing it node by node?
How can I even confirm that what I did actually changed the CPU Manager policy?
One issue with Dynamic Configuration is that if the node fails to restart, the API does not give a reasonable response back that tells you what you did wrong; you'll have to SSH into the node and tail the kubelet logs. Plus, you have to SSH into every node and set the --dynamic-config-dir flag anyway.
The following worked best for me.
SSH into the node. Edit
vim /etc/systemd/system/kubelet.service
Add the following lines
--cpu-manager-policy=static \
--kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
--system-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
We need to set the --kube-reserved and --system-reserved flags because they're prerequisites to setting the --cpu-manager-policy flag.
Then drain the node and delete the CPU manager state file:
rm -rf /var/lib/kubelet/cpu_manager_state
Restart the kubelet
sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl start kubelet
Uncordon the node and check the policy. This assumes that you're running kubectl proxy on port 8001.
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | grep cpuManager
If you use a newer k8s version and the kubelet is configured by a kubelet configuration file, e.g. config.yml, you can follow the same steps #satnam mentioned above, but instead of adding --kube-reserved, --system-reserved, and --cpu-manager-policy, you need to add kubeReserved, systemReserved, and cpuManagerPolicy in your config.yml. For example:
systemReserved:
  cpu: "1"
  memory: "100m"
kubeReserved:
  cpu: "1"
  memory: "100m"
cpuManagerPolicy: "static"
Meanwhile, be sure your CPUManager is enabled.
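A minimal sketch of applying the change after editing the file, reusing the drain/restart steps from the previous answer (the state-file path matches that answer and may differ on your node):
# Remove the stale CPU manager state before restarting with the new policy
sudo rm -f /var/lib/kubelet/cpu_manager_state
sudo systemctl daemon-reload
sudo systemctl restart kubelet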
It might not be the global way of doing stuff, but I think it will be much more comfortable than what you are currently doing.
First you need to run
kubectl proxy --port=8001 &
Download the configuration:
NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME}
Edit it accordingly, and push the configuration to the control plane. You will see a valid response if everything went well. Then you will have to edit the Node configuration so that it starts to use the new ConfigMap. There are many more possibilities; for example, you can go back to the default settings if anything goes wrong.
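A hedged sketch of the push step (the ConfigMap name my-node-config is illustrative and follows the pattern used in that documentation):
# Create a ConfigMap in kube-system from the edited kubelet configuration
kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
# Then point the Node at it by adding a spec.configSource stanza, for example via
kubectl edit node ${NODE_NAME}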
This process is described with all the details in this documentation section.
Hope this helps.

How does Kubectl connect to the master

I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem.
How is this implemented?
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver.
When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
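You can see exactly which apiserver endpoint and credentials kubectl will use with, for example:
# Show the effective configuration for the current context
kubectl config view --minify
# The server: field under clusters is the apiserver URL kubectl talks to over HTTPS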
In addition to what Robert said: the connection between your local CLI and the cluster is controlled through kubectl config set, see the docs.
The Getting started with Vagrant section of the docs should contain everything you need.