Data from host to kubernetes pod container - kubernetes

I'm just starting this journey into cloud native and Kubernetes (Minikube for now), but I'm stuck because I cannot pass files from the host into pod containers and persist them.
I have Nginx, PHP-FPM and MariaDB containers. Now I just need to test the app in Kubernetes (it runs fine under docker-compose), sharing data the same way I was doing with docker-compose.
How can I mount volumes in this scenario?
Volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/docker/volumes/sylius-standard-mysql-sylius-dev-data/_data/sylius
Claim File:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Thank you for the guidance...

It depends on which Minikube driver you're using. Check out https://minikube.sigs.k8s.io/docs/handbook/mount/ for a full overview, but basically you have to make sure the host folder is shared with the guest VM, then hostPath volumes will work as normal. You may want to try Docker Desktop instead as it somewhat streamlines this process.
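For example, with a VM-based driver you can share a host folder into the Minikube VM with minikube mount (the command keeps running in the foreground while the share is active) and then point your hostPath at the path inside the VM. A minimal sketch, assuming a host directory /home/me/mysql-data; the paths are illustrative, not taken from the question:
$ minikube mount /home/me/mysql-data:/mnt/mysql-data
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/mysql-data   # the mount target inside the Minikube VM, not the host path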

Found on the minikube GitHub repo, in "feature request: mount host volumes into docker driver":
Currently we are mounting the /var directory as a docker volume, so
that's the current workaround.
i.e. use this host directory, for getting things into the container ?
See e.g. docker volume inspect minikube for the details on it
So you may want to try using the /var directory as a workaround, as sketched below.
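A hedged sketch of that workaround, assuming the default profile/container name minikube and a made-up data folder (neither is from the issue): copy the data into the minikube node container somewhere under /var, then point a hostPath volume at it.
$ docker exec minikube mkdir -p /var/hostdata
$ docker cp ./sylius-data minikube:/var/hostdata/sylius
# then, in the PersistentVolume:
#   hostPath:
#     path: /var/hostdata/sylius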
If the previous workaround doesn't meet your expectations and you still want to use docker as your minikube driver - don't, because (as far as I know) you can't use extra mounts with the docker driver. Use a VM driver instead.
Another option: if you don't like the idea of using a VM, use kind (Kubernetes in Docker).
kind supports extra mounts. To configure it, check the kind documentation on extra mounts:
Extra Mounts
Extra mounts can be used to pass through storage on the host to a kind node for persisting data, mounting through code etc.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /path/to/my/files/
    containerPath: /files
And the rest is as you described: you need to create a PV with the same hostPath as specified by containerPath, for example as sketched below.
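A minimal PV sketch matching the extraMounts example above (the name, StorageClass and size are illustrative assumptions):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: files-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /files   # the containerPath from the kind config above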
You could also run minikube without any driver by specifying --driver=none with minikube start, which can be useful in some cases, but check the minikube docs for more information.

Related

Installing RabbitMQ on Minikube

I'm currently building a backend that, among other things, involves sending RabbitMQ messages from localhost into a K8s cluster where containers can run and pick up specific messages.
So far I've been using Minikube to carry out all of my Docker and K8s development but have run into a problem when trying to install RabbitMQ.
I've been following the RabbitMQ Cluster Operator official documentation (installing) (using). I got to the "Create a RabbitMQ Instance" section and ran into this error:
1 pod has unbound immediate persistentVolumeClaims
I fixed it by continuing with the tutorial and adding a PV and PVC into my RabbitMQCluster YAML file. Tried to apply it again and came across my next issue:
1 insufficient cpu
I've tried messing around with resource limits and requests in the YAML file but with no success yet. After Googling and doing some general research I noticed that my specific problem and setup (Minikube and RabbitMQ) don't seem to be very well covered. My question is, have I gone beyond the scope or use case of Minikube by trying to install external services like RabbitMQ? If so, what should be my next step?
If not, are there any useful tutorials out there for installing RabbitMQ in Minikube?
If it helps, here's my current YAML file for the RabbitMQCluster:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
spec:
  persistence:
    storageClassName: standard
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbimq-pvc
spec:
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  hostPath:
    path: /mnt/app/rabbitmq
    type: DirectoryOrCreate
Edit:
Command used to start Minikube:
minikube start
Output:
minikube v1.17.1 on Ubuntu 20.04
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Restarting existing docker container for "minikube" ...
minikube 1.18.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.18.1
To disable this notice, run: 'minikube config set WantUpdateNotification false'
Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
Verifying Kubernetes components...
Enabled addons: storage-provisioner, default-storageclass, dashboard
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
According to the command you used to start minikube, the error is because you don't have enough resources assigned to your cluster.
According to the source code of the RabbitMQ cluster operator, it seems that it needs 2 CPUs.
You need to adjust the number of CPUs (and probably the memory as well) when you initialize your cluster. Below is an example that starts a Kubernetes cluster with 4 CPUs and 8 GB of RAM:
minikube start --cpus=4 --memory 8192
If you want to check your currently allocated resources, you can run kubectl describe node.
Raising CPUs and memory definitely gets things unstuck, but if you are running minikube on a dev machine - which is probably a laptop - this is a pretty big resource requirement. I get that Rabbit runs in some really big scenarios, but can't it be configured to work in small scenarios?
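For small dev setups you can also try lowering the operator's defaults through the resources section of the RabbitmqCluster spec. A hedged sketch (the values are illustrative, well below what the operator recommends, and may not be enough for real workloads):
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
spec:
  replicas: 1
  resources:
    requests:
      cpu: 500m      # down from the operator's default request
      memory: 1Gi
    limits:
      cpu: 500m
      memory: 1Gi
  persistence:
    storageClassName: standard
    storage: 5Gi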

How to store my pod logs in a persistent storage?

I have been viewing logs for my pods using kubectl logs <pod-name>. But I want to persist these logs in a volume (some kind of persistent storage), because container logs will get wiped out if the pods go down. Is there a way to do this? Do I have to write some sort of a script?
I have read many answers but I still do not understand how to go about it, any help is appreciated. Thanks!
The Logging Architecture page in the Kubernetes documentation goes through a couple of ways to set up logging in your cluster.
The most interesting for you might be the cluster-level logging architecture:
While Kubernetes does not provide a native solution for cluster-level
logging, there are several common approaches you can consider. Here
are some options:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application
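As an illustration of the sidecar approach (the second option above), here is a minimal sketch; the image, log path and container names are assumptions for the example, not something from the question. In practice the sidecar would usually ship the files to one of the backends mentioned below rather than just tail them.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: myrepo/myapp:latest            # hypothetical application image writing to /var/log/myapp
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp
  - name: log-tailer
    image: busybox
    command: ["sh", "-c", "tail -n+1 -F /var/log/myapp/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/myapp
  volumes:
  - name: logs
    emptyDir: {}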
There are many solutions for collecting pod logs and shipping them to a centralized location such as:
fluentd
splunk
elastic
Keeping logs outside of the cluster has benefits: if your cluster begins to have issues, it is likely that your in-cluster logging architecture will face them as well.
You will need to mount the logs directory inside the container to the host machine as well, using a PersistentVolume and PersistentVolumeClaim.
This way you can persist these logs even if the container is killed.
Create a PersistentVolume and PersistentVolumeClaim for the log path and use them as volume mounts in your Kubernetes Deployments or Pods.
I know this is an old question, but I've just had the same problem and I've spent some time to figure out the solution, so I'd like to share a more detailed solution.
Like Aayush Mall said, you'll need the PersistentVolume and PersistentVolumeClaim objects to create the volume and then link it to the pod (preferably via a Deployment object).
Basically, the PersistentVolume defines how and where the volume is stored on the host, and the PersistentVolumeClaim defines the constraints to bind the volume to some container.
From the docs:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
So, let's say your pods are running in two nodes: mynode-1 and mynode-2.
Your PersistentVolume spec will look like this.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-log-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /var/log/myapp
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - mynode-1
          - mynode-2
Your PersistentVolumeClaim will look like this (note that volumeName must match the PV name, myapp-log-pv).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-log-pvc
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
  volumeName: myapp-log-pv
And then, you just have to tell the deployment object how to mount the volume inside the container. So, your Deployment spec will look like this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myrepo/myapp:latest
        volumeMounts:
        - name: log
          mountPath: /var/log
      volumes:
      - name: log
        persistentVolumeClaim:
          claimName: myapp-log-pvc
And that's it. When your Deployment starts, it'll create the pod with the container, mount a volume named log at the path /var/log (inside the container) and bind this volume to some PV matching the requirements of the PVC named myapp-log-pvc. As we've created myapp-log-pv with the same volumeMode, accessModes and storageClassName fields, and with more storage capacity than required by myapp-log-pvc, they will be bound. So your app logs will be stored under /var/log/myapp (the field spec.local.path in the myapp-log-pv spec) on the node running the pod.
I hope it helps :)
Also, I'm kind of new to the Kubernetes world, so please let me know if you notice I misunderstood something or if there is a better way to do this.

Unable to mount NFS on Kubernetes Pod

I am working on deploying the Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted on /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
  - name: shell
    image: ubuntu
    command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
    volumeMounts:
    - mountPath: "/data"
      name: pv
  volumes:
  - name: pv
    persistentVolumeClaim:
      claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine and I was able to mount it inside minikube. I don't understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot does not show up in test-shell:/data.
What point am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully or not.
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
For example, in my case:
$ showmount -e 172.16.10.161
Export list for 172.16.10.161:
/opt/share *
Use the df -hT command to see whether NFS is mounted or not; in my case it gives this output:
172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, check whether the firewall is allowing NFS or not:
$ sudo ufw status
If not, allow it using:
$ sudo ufw allow from nfs-server-ip to any port nfs
I made the same setup and I don't face any issues; my Fabric cluster on k8s is running successfully. The Hyperledger Fabric k8s YAML files can be found in my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric as a dynamic multi-host blockchain network, which means you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the go in an existing running blockchain network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example this property can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME PROVISIONER
standard (default) k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused a new PV to be created (without a StorageClass) and a new PVC to be created (with a reference to the existing default StorageClass). In this situation there is no reference between your custom PV and PVC, so no binding happens. As an example, please take a look:
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs 3Gi RWX Retain Available 50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX Delete Bound default/pvc-nfs standard 50m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-nfs Bound pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX standard 50m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 103m
This example will not work due to:
a new persistentvolume/nfs has been created (without any reference to a PVC)
a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using the default StorageClass. In the Claim column we can see that this PV was created by dynamic PV provisioning using the default StorageClass, with a reference to the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
So you are able to mount this NFS share from inside the minikube host.
If you mounted the NFS share into your minikube node, please try this example with a hostPath volume directly from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
  - name: pv
    hostPath:
      path: /path/shares # path to nfs mount point on minikube node
  containers:
  - name: shell
    image: ubuntu
    command: ["/bin/bash", "-c", "sleep 1000 "]
    volumeMounts:
    - name: pv
      mountPath: /data
Solution 2.
If you are using the PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be explicitly set, otherwise the default StorageClass will be set / or use a custom storageClassName
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be explicitly set, otherwise the default StorageClass will be set / or use a custom storageClassName
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass:
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
Please keep in mind that when you create a PVC, the Kubernetes persistent-volume controller tries to bind the PVC to a matching PV. During this process different factors are taken into account, such as: storageClassName (default/custom), accessModes, claimRef and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to inspect the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events

AZDATA BDC CREATE stuck. Control containers pending. Scheduling error on NFS PVC

I am very new to Linux, Docker and Kubernetes. I need to set up an on-premises POC to showcase BDC.
What I have installed.
1. Ubuntu 19.10
2. Kubernetes Cluster
3. Docker
4. NFS
5. Settings and prerequisites but obviously missing stuff.
This has been done with stitched-together tutorials. I am stuck on "AZDATA BDC CREATE". Error below.
Scheduling error on POD PVC.
Some more information was attached as screenshots: NFS information, storage class info, and (update 2019-12-20) the PV & PVCs bound on the NFS side.
Dynamic Volume Provisioning alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand. In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
First make sure you have defined your StorageClass properly. You have defined an nfs-dynamic class but it is not set as the default storage class; that's why your claims cannot bind volumes through it. You have two options:
1. Execute the command below:
$ kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
2. Another option is to specify in the PVC configuration file the StorageClass you want to use.
Here is an example configuration of such a file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
Simply add the line storageClassName: nfs-dynamic.
Then make sure you have followed the steps from this instruction: nfs-kubernetes.

How to mount local volume to ksonnet component deployed in kubeflow

I am trying to mount a local directory into a component deployed in kubeflow using ksonnet prototype.
There is no way to mount a local directory into a Kubernetes pod (after all kubeflow and ksonnet just create pods and other Kubernetes resources).
If you want your files to be available in Kubernetes, I can think of two options:
Create a custom docker image, copying the folder you want, and push it to a registry (a rough sketch of this is shown after this answer). Kubeflow has parameters to customize the images to be deployed.
Use NFS. That way you could mount the NFS volume on your local machine and also in the pods. To do that you would have to modify the ksonnet code, since in the last stable version it is not implemented.
If you provide more information about which component you are trying to deploy and which cloud provider you're using, I could help you more.
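A rough sketch of the first option; the base image, registry and folder names below are made-up placeholders, and the exact parameter name depends on the component:
# Dockerfile (hypothetical): start from the component's base image and copy your files in
#   FROM <component-base-image>
#   COPY ./my-local-data /data
$ docker build -t registry.example.com/my-component:0.1 .
$ docker push registry.example.com/my-component:0.1
Then point the component's image parameter at the new image, e.g. with ks param set <component> image registry.example.com/my-component:0.1.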
If by local directory you mean a local directory on the node, then it is possible to mount a directory on the node's filesystem inside a pod using the HostPath or Local Volumes feature.
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
For example:
# HostPath volume example
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
# local volume example (beta in v1.10)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
GlusterFS is also available as a Volume or as a PersistentVolume (access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany).
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be "handed off" between Pods. GlusterFS can be mounted by multiple writers simultaneously.
See the GlusterFS example for more details.
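For reference, a minimal sketch of consuming a glusterfs volume inline in a pod, modeled on the upstream example; the Endpoints object name, Gluster volume name and image are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - mountPath: /mnt/glusterfs
      name: glusterfsvol
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing the Gluster servers
      path: kube_vol                 # name of the Gluster volume
      readOnly: true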