I try and I try, but Rancher 2.1 fails to deploy the "mongo-replicaset" Catalog App with Local Persistent Volumes configured.
How do I correctly deploy a mongo-replicaset with a Local Storage Volume? Any debugging techniques are appreciated, since I am new to Rancher 2.
I follow the four steps (A-D) below, but the first pod deployment never finishes. What is wrong with it? Logs and result screens are at the end. Detailed configuration can be found here.
Note: Deployment without Local Persistent Volumes succeeds.
Note: Deployment with a Local Persistent Volume and the plain "mongo" image (no replica set) succeeds.
Note: Deployment with both mongo-replicaset and a Local Persistent Volume fails.
Step A - Cluster
Create a Rancher instance, and:
Add three nodes: a worker, a worker + etcd, and a worker + control plane
Add a label on each node: name one, name two and name three, for node affinity
Step B - Storage class
Create a storage class with these parameters (a YAML sketch of this StorageClass follows the list):
volumeBindingMode: WaitForFirstConsumer, as seen here
name: local-storage
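For reference, the equivalent StorageClass manifest would look roughly like this (a sketch; local volumes have no dynamic provisioner, hence kubernetes.io/no-provisioner):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer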
Step C - Persistent Volumes
Add 3 persistent volumes like this (a YAML sketch for one of them follows the list):
type: local node path
Access Mode: Single Node RW, 12Gi
storage class: local-storage
Node Affinity: name one (two for the second volume, three for the third)
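For reference, one of these volumes expressed as YAML might look roughly like this (a sketch; the /mongo path comes from the accepted answer below, and the label key/value name: one is an assumption about how the Step A node labels were written):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-one
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mongo            # node-local directory (see the answer below)
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: name     # assumed label key from Step A
              operator: In
              values:
                - one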
Step D - Mongo-replicaset Deployment
From the catalog, select Mongo-replicaset and configure it like this (an equivalent helm command is sketched after the list):
replicaSetName: rs0
persistentVolume.enabled: true
persistentVolume.size: 12Gi
persistentVolume.storageClass: local-storage
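For reference, this catalog app appears to correspond to the stable/mongodb-replicaset helm chart, so an equivalent command-line install would be roughly (the release name mongo is arbitrary):
helm install --name mongo \
  --set replicaSetName=rs0 \
  --set persistentVolume.enabled=true \
  --set persistentVolume.size=12Gi \
  --set persistentVolume.storageClass=local-storage \
  stable/mongodb-replicaset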
Result
After doing steps A-D, the newly created mongo-replicaset app stays in the "Initializing" state forever.
The associated mongo workload contains only one pod instead of three, and this pod has two 'crashed' containers, bootstrap and mongo-replicaset.
Logs
This is the output from the four containers of the only running pod. There is no error, no problem.
I can't figure out what's wrong with this configuration, and I don't have any tools or techniques to analyze the problem. Detailed configuration can be found here. Please ask me for more command results.
Thank you
All of this configuration is correct.
What's missing is a detail specific to Rancher, since Rancher is a containerized deployment of Kubernetes.
The kubelets are deployed on each node in Docker containers, so they do not have access to the OS local folders.
You need to add a volume bind for the kubelets; that way Kubernetes will be able to create the mongo pod with this same binding.
In rancher:
Edit the cluster yaml (Cluster > Edit > Edit as Yaml)
Add the following entry under the "services" node (a quick verification is sketched after the snippet):
kubelet:
  extra_binds:
    - "/mongo:/mongo:rshared"
Related
I just created the following PersistentVolume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sql-pv
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/sqldata"
Then I SSHed into the node and traversed to /var/lib, but I cannot see the sqldata directory created anywhere in it.
Where is the real directory created?
I created a Pod that mounts this volume to a path inside the container. When I open a shell in the container, I can see the files in the mount path. Where are these files stored?
You have set up your cluster on Google Kubernetes Engine, which means the nodes are virtual machine instances on GCP. You've probably been connecting to the cluster using the Kubernetes Engine dashboard and the Connect to the cluster option. That does not SSH you into any of the nodes; it just starts a GCP Cloud Shell terminal instance with a command like:
gcloud container clusters get-credentials {your-cluster} --zone {your-zone} --project {your-project-name}
That command configures the kubectl agent on GCP Cloud Shell by setting the proper cluster name, certificates, etc. in the ~/.kube/config file, so you have access to the cluster (by communicating with the cluster endpoint), but you are not SSHed into any node. That's why you can't access the path defined in the hostPath.
To find the hostPath directory, you need to:
find which node the pod is running on
SSH into that node
Finding a node:
Run the kubectl get pod {pod-name} command with the -o wide flag - change {pod-name} to your pod name:
user#cloudshell:~ (project)$ kubectl get pod task-pv-pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
task-pv-pod 1/1 Running 0 53m xx.xx.x.xxx gke-test-v-1-21-default-pool-82dbc10b-8mvx <none> <none>
SSH to the node:
Run the gcloud compute ssh {node-name} command - change {node-name} to the node name from the previous command:
user#cloudshell:~ (project)$ gcloud compute ssh gke-test-v-1-21-default-pool-82dbc10b-8mvx
Welcome to Kubernetes v1.21.3-gke.2001!
You can find documentation for Kubernetes at:
http://docs.kubernetes.io/
The source for this release can be found at:
/home/kubernetes/kubernetes-src.tar.gz
Or you can download it at:
https://storage.googleapis.com/kubernetes-release-gke/release/v1.21.3-gke.2001/kubernetes-src.tar.gz
It is based on the Kubernetes source at:
https://github.com/kubernetes/kubernetes/tree/v1.21.3-gke.2001
For Kubernetes copyright and licensing information, see:
/home/kubernetes/LICENSES
user#gke-test-v-1-21-default-pool-82dbc10b-8mvx ~ $
Now there will be the hostPath directory (in your case /var/lib/sqldata), and there will also be files in it if the pod created some.
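From that node shell you can check the directory directly (the path comes from your manifest):
ls -la /var/lib/sqldata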
Avoid hostPath if possible
Using hostPath is not recommended. As mentioned in the comments, it will cause issues when a pod is created on a different node (though you have a single-node cluster), and it also presents many security risks:
Warning:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.
In your case it's much better to use the gcePersistentDisk volume type - check this article.
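A minimal sketch of a PersistentVolume backed by a GCE persistent disk could look like this (the disk name sql-data-disk is hypothetical; the disk must already exist in the same zone as the node, e.g. created with gcloud compute disks create):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: sql-data-disk   # hypothetical pre-created GCE disk
    fsType: ext4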
I am new to k8s and I want to get a few clarifications on the questions below; please let me know your thoughts.
Are persistent volume claims confined to a single namespace?
A PersistentVolumeClaim (kubectl get pvc) is confined to a namespace. A PersistentVolume (kubectl get pv) is defined at the cluster level. Any namespace can claim a PV that is not already "Bound".
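For example, a minimal PVC scoped to a namespace might look like this (names and size are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim        # illustrative name
  namespace: my-app       # the PVC only exists in this namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi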
You have to install a CNI (Container Network Interface) plugin like Calico or Flannel. There you specify a pod network CIDR, e.g. 10.20.0.0/16. The IP address management of, e.g., Calico then splits that network into smaller networks, and each Kubernetes node gets its own subnet from the 10.20.0.0/16 network.
If you mean the Kubernetes "infrastructure", it's mostly deployed to kube-system. To deploy your own stuff like monitoring, logging, or storage, you can create your own namespaces.
No, not all objects are bound to a namespace. With kubectl api-resources you will get an overview.
There are a lot of storage types (https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes). But if you don't specify any persistent volumes (PV), files written inside a container are gone if the container restarts.
A Pod is the smallest unit that can be addressed. A Pod can contain multiple containers.
A Deployment describes the desired state of the Pod. It's recommended to use a Deployment. You can start a Pod without a Deployment, but if you delete the Pod it will not be restarted by the kubelet. (The following command creates a Pod without a Deployment: kubectl run nginx --image=nginx --port=80 --restart=Never.) For storage, you would specify the PVC in the Deployment, but you have to create that PVC first (https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
Exactly. For e.g. MySQL you would use Recreate; for httpd you would use RollingUpdate.
What do you mean by local proxy? For local development you can have a look at minikube.
No, a Pod has only 1 IP.
Are persistent volume claims confined to a single namespace?
A PersistentVolumeClaim (PVC) is bound to a namespace. The PVC must exist in the same namespace as the Pod using the claim.
How many pod networks can we have per cluster?
By default there is a maximum of 110 Pods per node; Kubernetes assigns a /24 CIDR block (256 addresses) to each node.
Which namespace contains the infrastructure pods?
Generally kube-system
Are all objects restricted to a single namespace?
No, not all objects are restricted to a single namespace. You can create objects in different namespaces.
Do containers offer persistent storage that outlives the container?
If you use a PV/PVC, then your storage is persistent and outlives the container.
What is the smallest object or unit (pod, container, replicaset or deployment) we can work with in k8s?
A Kubernetes Pod is a group of containers, and is the smallest unit that Kubernetes administers.
Does a deployment use a persistent volume or a persistent volume claim?
You need to reference the PVC in the Deployment, in the volumes section, like the following (a matching volumeMounts sketch follows the snippet):
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: <pvc name>
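The container then mounts that volume by name in its volumeMounts section, for example (the image and mount path are illustrative):
containers:
  - name: app
    image: nginx                 # illustrative image
    volumeMounts:
      - name: data               # must match the volume name above
        mountPath: /var/lib/data # illustrative mount path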
With the Deployment spec, which strategy (Recreate or RollingUpdate) allows us to control updates to the pods?
Recreate will terminate all the running instances and then recreate them with the newer version. RollingUpdate follows a defined strategy for how many instances are taken down and recreated at a time.
How can we start a local proxy, which is useful for development and testing?
You can use port-forwarding
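For example (the pod name and ports are illustrative):
kubectl port-forward pod/my-pod 8080:80
# the application inside the pod is then reachable at http://localhost:8080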
Can a Pod have multiple IP addresses?
A single Pod has a single IP address. Details here.
Suppose I bootstrap a single master node with kubelet v1.10.3 in an OpenStack cloud, and I would like to have a "self-hosted" single etcd node for the cluster's needs, running as a pod.
Before starting the kube-apiserver component you need a working etcd instance, but of course you can't just run kubectl apply -f or drop a manifest into the addon-manager folder, because the cluster is not ready at all.
There is a way to start pods with the kubelet without having a ready apiserver. It is called static pods (YAML Pod definitions usually located at /etc/kubernetes/manifests/), and it is the way I start "system" pods like the apiserver, scheduler, controller-manager and etcd itself. Previously I just mounted a directory from the node to persist etcd data, but now I would like to use an OpenStack block storage resource. And here is the question: how can I attach, mount and use an OpenStack Cinder volume to persist etcd data from a static pod?
As I learned today, there are at least 3 ways to attach OpenStack volumes:
The CSI OpenStack Cinder driver, which is a pretty new way of managing volumes. It won't fit my requirements, because in static pod manifests I can only declare Pods and not other resources like PVC/PV, while the CSI docs say:
The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
The pre-CSI way to attach volumes: FlexVolume.
FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node (and in some cases master).
OK, I added those binaries to my node (using this DaemonSet as a reference) and added the volume to the pod manifest like this:
volumes:
  - name: test
    flexVolume:
      driver: "cinder.io/cinder-flex-volume-driver"
      fsType: "ext4"
      options:
        volumeID: "$VOLUME_ID"
        cinderConfig: "/etc/kubernetes/cloud-config"
and got the following error from the kubelet logs:
driver-call.go:258] mount command failed, status: Failure, reason: Volume 2c21311b-7329-4cf4-8230-f3ce2f23cf1a is not available
which is weird, because I am sure this Cinder volume is already attached to my CoreOS compute instance.
And the last way to mount volumes that I know of is the in-tree Cinder support, which should work since at least k8s 1.5 and does not have any special requirements besides the --cloud-provider=openstack and --cloud-config kubelet options.
The YAML manifest part declaring the volume for the static pod looks like this:
volumes:
  - name: html-volume
    cinder:
      # Enter the volume ID below
      volumeID: "$VOLUME_ID"
      fsType: ext4
Unfortunately, when I try this method I get the following error from the kubelet:
Volume has not been added to the list of VolumesInUse in the node's volume status for volume.
I do not know what it means, but it sounds like the node status could not be updated (of course, there is no etcd or apiserver yet). Sadly, it was the most promising option for me.
Are there any other ways to attach an OpenStack Cinder volume to a static pod relying on the kubelet only (when the cluster is actually not ready)? Any ideas on what I could be missing or why I got the errors above?
The message "Volume has not been added to the list of VolumesInUse in the node's volume status for volume" says that attach/detach operations for that node are delegated to the controller-manager only. The kubelet waits for the attachment to be made by the controller, but the volume never reaches the appropriate state because the controller isn't up yet.
The solution is to set the kubelet flag --enable-controller-attach-detach=false to let the kubelet attach, mount and so on. This flag is set to true by default for the following reasons:
If a node is lost, volumes that were attached to it can be detached by the controller and reattached elsewhere.
Credentials for attaching and detaching do not need to be made present on every node, improving security.
In your case, setting this flag to false is reasonable, as it is the only way to achieve what you want.
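In practice this just means adding the flag to the kubelet command line alongside the cloud-provider options already mentioned in the question; roughly (a sketch, the exact invocation depends on how your kubelet is launched):
kubelet \
  --cloud-provider=openstack \
  --cloud-config=/etc/kubernetes/cloud-config \
  --enable-controller-attach-detach=false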
Using Kubernetes 1.7.0, the intention here is to be able to deploy MySQL / MongoDB / etc. and use local disk as the storage backing, while webheads and processing pods can be autoscaled by Kubernetes. To these ends, I have:
Set up and deployed the Local Persistent Storage provisioner to automatically provision locally attached disks to pods' Persistent Volume Claims.
Manually created a Persistent Volume Claim, which succeeds, and the local volume is attached.
Attempted to deploy MariaDB via helm with:
helm install --name mysql --set persistence.storageClass=default stable/mariadb
This appears to succeed, but in the dashboard I get:
Storage node affinity check failed for volume "local-pv-8ef6e2af" : NodeSelectorTerm [{Key:kubernetes.io/hostname Operator:In Values:[kubemaster]}] does not match node labels
I suspect this might be due to helm's charts not including node affinity. Other than updating each chart manually, is there a way to tell helm to deploy to the same node where the provisioner has the volume?
Unfortunately, no. You will need to specify node affinity so that the Pod lands on the node where the local storage is located. See the docs on Node Affinity to learn what to add to the helm chart.
I suspect it would look something like the following in your case.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kubemaster
As an aside, this is something that will happen not just at the node level, but also at the zone level in cloud environments like AWS and GCP. In those environments, persistent disks are zonal and require you to set node affinity so that the Pods land in the zone with the persistent disk when deploying to a multi-zone cluster.
Also as an aside, it looks like you are deploying to the Kubernetes master? If so, that may not be advisable, since MySQL could potentially affect the master's operation.