Kubernetes nfs volumes

I'm referring to this [1] document about creating NFS mounts on Kubernetes, but I have a couple of issues that need clarifying.
What is meant by privileged containers?
How can I set allow-privileged to true on a Kubernetes setup installed on a bare-metal Ubuntu machine? I set it up using the kube-up.sh script.
[1] http://kubernetes.io/v1.1/examples/nfs/index.html

https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/pods.md#privileged-mode-for-pod-containers
That refers to the corresponding field in the yaml: https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2089
And it should be on by default in 1.1 (https://github.com/kubernetes/kubernetes/blob/b9cfab87e33ea649bdd13a1bd243c502d76e5d22/cluster/saltbase/pillar/privilege.sls#L2); you can confirm by creating the pod and running kubectl describe as the docs say.
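For reference, a minimal sketch of how the privileged flag is set in a pod spec; the image and names here are placeholders rather than the exact NFS example:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
spec:
  containers:
  - name: nfs-server
    image: example/nfs-server   # placeholder image
    securityContext:
      privileged: true          # rejected unless the cluster allows privileged containers
On clusters brought up with kube-up.sh, this typically maps to the --allow-privileged=true flag passed to both the kube-apiserver and the kubelet, so those are the flags to check on a bare-metal Ubuntu setup.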

Related

gpu worker node unable to join cluster

I have an EKS setup (v1.16) with 2 ASGs: one for compute ("c5.9xlarge") and the other for GPU ("p3.2xlarge").
Both are configured as Spot and set with desiredCapacity 0.
The Kubernetes Cluster Autoscaler works as expected and scales out each ASG when necessary; the issue is that the newly created GPU instance is not recognized by the master, and running kubectl get nodes shows nothing.
I can see that the EC2 instance is in the Running state and I can SSH into the machine.
I double-checked the labels and tags and compared them to the "compute" node group.
Both are configured almost identically; the only difference is that the GPU node group has a few additional tags.
Since I'm using the eksctl tool (v0.35.0) and the compute nodeGroup vs. the GPU nodeGroup is basically copy & paste, I can't figure out what the problem could be.
UPDATE:
SSHing into the instance, I could see the following error in /var/log/messages:
failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
and the kubelet service crashed.
Could it be that my GPU node group uses the wrong AMI (amazon-eks-gpu-node-1.18-v20201211)?
As a simple workaround, you can use these preBootstrapCommands in the eksctl YAML config file:
- name: test-node-group
  preBootstrapCommands:
    - "sed -i 's/cgroupDriver:.*/cgroupDriver: cgroupfs/' /etc/eksctl/kubelet.yaml"
There is a known issue with EKS 1.16; even Graviton-based machines won't join the cluster. To fix it, first try upgrading your CNI version. Please refer to the documentation here:
https://docs.aws.amazon.com/eks/latest/userguide/cni-upgrades.html
If that doesn't work, upgrade your EKS cluster to the latest available version; then it should work.
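To see which CNI version the cluster is currently running before upgrading (the aws-node daemonset in kube-system is the Amazon VPC CNI plugin), something like this should work:
kubectl describe daemonset aws-node --namespace kube-system | grep Image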
I've found the issue. It seems to be a misalignment between eksctl (v0.35.0) and the AL2 GPU AMI.
The AWS team changed the Docker cgroup driver to "systemd" instead of "cgroupfs" (see GitHub), while the eksctl version I used hadn't absorbed that change.
A temporary solution is to edit the /etc/eksctl/kubelet.yaml file using preBootstrapCommands, as in the sketch below.
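For completeness, a sketch of where that fits in the eksctl config file; the node group name, instance type and sizes below are placeholders, and only the preBootstrapCommands entry is the actual workaround:
nodeGroups:
  - name: gpu-spot                  # placeholder name
    instanceType: p3.2xlarge
    desiredCapacity: 0
    minSize: 0
    maxSize: 2
    preBootstrapCommands:
      # align the kubelet cgroup driver with Docker's before the kubelet starts
      - "sed -i 's/cgroupDriver:.*/cgroupDriver: cgroupfs/' /etc/eksctl/kubelet.yaml"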

Accessing a shared directory inside GKE from an external world

I am a newbie to Google Cloud and GKE, and I am trying to set up NFS Persistent Volumes with Kubernetes on GKE with the help of the following link:
https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
I followed the instructions and was able to achieve the desired results as mentioned in the blog, but I need to access the shared folder (/uploads) from the external world. Can someone help me achieve this, or offer any pointers or suggestions?
I have followed the doc and implemented the steps on my test GKE cluster like you. One observation about the current API version for the deployment: we need to use apiVersion: apps/v1 instead of apiVersion: extensions/v1beta1. I then tested with a busybox pod mounting the volume, and the test was successful.
Then I exposed the "nfs-server" service as service type "LoadBalancer", as shown below.
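A sketch of such a Service, assuming the selector labels used by the blog's nfs-server deployment:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: LoadBalancer
  selector:
    role: nfs-server        # assumed label from the blog's nfs-server deployment
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111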
I found the external load balancer endpoint as (LB_Public_IP):111 in the Services & Ingress tab. I allowed ports 111, 2049 and 20048 in the firewall. After that I created a Red Hat based VM in the GCP project and installed the NFS client with "sudo dnf install nfs-utils -y". Then you can use the command below to see the NFS exports list, and mount it as expected.
sudo showmount -e LB_Public_IP
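Mounting the export on the external VM would then look something like this; the /uploads path is taken from the question, so use whatever showmount actually reports:
sudo mkdir -p /mnt/uploads
sudo mount -t nfs LB_Public_IP:/uploads /mnt/uploads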
Please have a look at the sample configuration below, and you may also follow the GCP docs.

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, and it briefly explains how to reserve compute resources on a node using the kubelet flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube for macOS, and as far as I can tell, minikube is set up to be used with the kubectl command alongside the minikube command.
For local learning purposes on minikube I don't need to have this set (maybe it can't even be done on minikube), but
how could this be done, say, in a K8s development environment on a node?
This can be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (see the sketch after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
and additional configuration types like:
kind: KubeletConfiguration
To get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, please consider reconfiguring the current cluster using the steps "Generate the configuration file" and "Push the configuration file to the control plane", as described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it but for minikube - please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
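As a sketch for item 1 above, a KubeletConfiguration section with the reservations set might look like this; the values are arbitrary examples:
# kubelet.config.k8s.io/v1beta1 fields matching the --kube-reserved,
# --system-reserved and --eviction-hard flags; values are examples only
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: "500m"
  memory: "500Mi"
systemReserved:
  cpu: "500m"
  memory: "500Mi"
evictionHard:
  memory.available: "200Mi"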
Hope this helped.
Additional community resources:
Memory usage in kubernetes cluster

Can't run Kubernetes dashboard after installing Kubernetes cluster on rancher/server

Docker: 1.12.6
rancher/server: 1.5.10
rancher/agent: 1.2.2
I tried two ways to install a Kubernetes cluster on rancher/server.
Method 1: Use Kubernetes environment
Infrastructure/Hosts
Agent hosts disconnected sometimes.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Method 2: Use Default environment
Infrastructure/Hosts
Set some labels to rancher server and agent hosts.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Both of them have this issue: kubernetes-ingress-lbs has 0 services and 0 containers, and then I can't access the Kubernetes dashboard.
Why wasn't it installed by Rancher?
And is it necessary to add those labels for the Kubernetes cluster?
Here is a correctly deployed Kubernetes cluster on a Rancher server:
Turning on Show System, you can find the kubernetes-dashboard service under the kube-system namespace.
Since the Kubernetes version in use is v1.5.4, you should prepare in advance by pulling the Docker images below:
By reading rancher/catalog and rancher/kubernetes-package, you can understand and even modify the config files (like docker-compose.yml, rancher-compose.yml and so on) yourself.
When you enable "Show System" containers in the UI, you should be able to see the dashboard container running under the kube-system namespace. If this container is not running, the dashboard will not be able to load.
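If kubectl is already pointed at the Rancher-managed cluster, the same check can be done from the command line; the service name follows what is described above:
kubectl get pods --namespace=kube-system
kubectl get service kubernetes-dashboard --namespace=kube-system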
You might have to enable the Kubernetes add-on service within the Rancher environment template:
Manage Environments >> edit the Kubernetes default template >> enable the add-on service and save the new template with a preferred name.
Now launch the cluster using the customized template.

kubernetes create cluster with logging and monitoring for ubuntu

I'm setting up a Kubernetes cluster on DigitalOcean Ubuntu machines. I got the cluster up and running following this getting started guide for Ubuntu. During the setup the ENABLE_NODE_LOGGING, ENABLE_CLUSTER_LOGGING and ENABLE_CLUSTER_DNS variables are set to true in config-default.sh.
However, no controllers or services are created for elasticsearch/kibana. I did have to run deployAddon.sh manually for skydns; do I need to do the same for logging and monitoring, or am I missing something in the default configuration?
By default the logging and monitoring services are not in the default namespace.
You should be able to see if the services are running with kubectl cluster-info.
To look at the individual services/controllers, specify the kube-system namespace:
kubectl get service --namespace=kube-system
By default, logging and monitoring are not enabled if you are installing Kubernetes on Ubuntu machines. It looks like someone copied the config-default.sh script from another folder, hence the ENABLE_NODE_LOGGING and ENABLE_CLUSTER_LOGGING variables are present but are not used to bring up the relevant logging deployments and services.
As Jon Mumm said, kubectl cluster-info gives you the info. But if you want to install the logging service, go to
kubernetes/cluster/addons/fluentd-elasticsearch
and run
kubectl create -f es-controller.yaml -f es-service.yaml -f kibana-controller.yaml -f kibana-service.yaml
with the right setup. Change the yaml files to suit your configuration and ensure kubectl is in your path.
Update 1: This will bring up the kibana and elasticsearch services.
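To verify the add-on came up, something along these lines should work; the exact resource names depend on the add-on version shipped with your cluster:
kubectl get rc,services --namespace=kube-system
kubectl cluster-info
kubectl cluster-info should then also list the Elasticsearch and Kibana endpoints alongside the master.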