I have a MySQL container that I'm deploying through Kubernetes, in which I am mounting a directory that contains a script. Once the pod is up and running, the plan is to execute that script.
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: mysql-stuff
          hostPath:
            path: /home/myapp/scripts
            type: Directory
      containers:
        - name: mysql-db
          image: mysql:latest
          volumeMounts:
            - name: mysql-stuff
              mountPath: /scripts/
Once I have it up and running, I run kubectl exec -it mysql-db -- bin/sh and ls scripts, but it returns nothing: the script that should be inside the directory is not there, and I can't work out why. For the sake of getting this working I have added no security context and am running the container as root. Any help would be greatly appreciated.
You are running your pod in a minikube cluster. Minikube itself runs in a VM, so the hostPath here refers to a path inside the minikube VM, not on your actual host.
However, you can map your actual host path into the minikube VM, and then it will become accessible:
minikube mount /home/myapp/scripts:/home/myapp/scripts
See more here: https://minikube.sigs.k8s.io/docs/handbook/mount/
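As a quick check (a sketch, assuming the minikube mount command above is left running in its own terminal and using a placeholder for the pod name created by your Deployment), you can verify the path is visible in the minikube VM and then inside the pod:

# keep this running in a separate terminal
minikube mount /home/myapp/scripts:/home/myapp/scripts

# verify the path exists inside the minikube VM
minikube ssh -- ls -l /home/myapp/scripts

# verify the mount inside the pod
kubectl exec -it <mysql-pod-name> -- ls -l /scripts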
Scenario is like this:
I have a pod running on a node in a K8s cluster in GCP. The cluster is created using kops and the pod is created using kne_cli.
I know only the name of the pod, e.g. "test-pod".
My requirement is to configure something on the node where this pod is running, e.g. I want to update the "iptables -t nat" table on the node.
How do I access the node and configure it from within a pod?
Any suggestion will be helpful.
You can use a Job, a Deployment, or a plain Pod; it's not clear how your Pod is being managed. If you just want to run that task once, a Job is a good fit for you.
One option is to use the SSH way:
You can run a POD from which you get a list of Nodes (or a specific node, as needed) and run an SSH command to connect to that node.
That way you will be able to access the Node from the POD and run commands on top of the Node.
You can check this document for reference: https://alexei-led.github.io/post/k8s_node_shell/
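The linked guide essentially schedules a privileged pod that joins the host's namespaces via nsenter rather than SSH. A minimal sketch of that idea (pod name, target node and the iptables command are placeholders; it assumes the image ships an nsenter applet and that iptables is installed on the host):

apiVersion: v1
kind: Pod
metadata:
  name: node-shell            # hypothetical name
spec:
  nodeName: <target-node>     # assumption: the node hosting "test-pod"
  hostPID: true
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: shell
    image: busybox            # assumption: busybox provides nsenter
    securityContext:
      privileged: true
    # -m enters the host mount namespace, so the host's iptables binary is used
    command: ["nsenter", "-t", "1", "-m", "-n", "-p", "--", "iptables", "-t", "nat", "-L"]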
Option two:
You can mount a shell script containing the iptables command onto the Node and invoke that script from the POD, which will run the command whenever you want.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: command
data:
  command.sh: |
    #!/bin/bash
    echo "running sh script on node..!"
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: command
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cron-namespace-admin
          containers:
          - name: command
            image: IMAGE:v1
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - name: commandfile
              mountPath: /test/command.sh
              subPath: command.sh
            - name: script-dir
              mountPath: /test
          restartPolicy: OnFailure
          volumes:
          - name: commandfile
            configMap:
              name: command
              defaultMode: 0777
          - name: script-dir
            hostPath:
              path: /var/log/data
              type: DirectoryOrCreate
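Note that the manifest above never shows the script actually being invoked; the container's entrypoint has to call it. A minimal, assumed addition to the container spec (adjust the shell to whatever your image provides):

command: ["/bin/bash", "/test/command.sh"]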
Use privileged mode
securityContext:
  privileged: true
Privileged - determines if any container in a pod can enable
privileged mode. By default a container is not allowed to access any
devices on the host, but a "privileged" container is given access to
all devices on the host. This allows the container nearly all the same
access as processes running on the host. This is useful for containers
that want to use linux capabilities like manipulating the network
stack and accessing devices.
Read more: https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged
You might be better off using GKE and configuring the ip-masq-agent as described here: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
In case you stick with kops on GCE, I would suggest following the guide for ip-masq-agent here instead of the GKE docs: https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/
In case you really need to run custom iptables rules on the host, then your best option is to create a DaemonSet with pods that are privileged and have hostNetwork: true. That should allow you to modify iptables rules directly on the host from the pod.
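A rough sketch of that DaemonSet approach (the image, names and the example iptables rule are placeholders, not a tested manifest):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: custom-iptables        # hypothetical name
spec:
  selector:
    matchLabels:
      app: custom-iptables
  template:
    metadata:
      labels:
        app: custom-iptables
    spec:
      hostNetwork: true
      containers:
      - name: iptables
        image: alpine:3.18     # assumption: any small image with a package manager
        securityContext:
          privileged: true
        command: ["/bin/sh", "-c"]
        # install iptables, apply the example NAT rule, then keep the pod alive
        args:
        - apk add --no-cache iptables &&
          iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE &&
          tail -f /dev/null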
I want to share my non-empty local directory with a kind cluster.
Based on the answer here: How to reference a local volume in Kind (kubernetes in docker)
I tried a few variations of the following:
Kind Cluster yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/xyz/documents/k8_automation/data/manual/
    containerPath: /host_manual
  extraPortMappings:
  - containerPort: 30000
    hostPort: 10000
Pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: manual
spec:
  serviceAccountName: manual-sa
  containers:
  - name: tools
    image: tools:latest
    imagePullPolicy: Never
    command:
    - bash
    tty: true
    volumeMounts:
    - mountPath: /home/jenkins/agent/data
      name: data
  volumes:
  - name: data
    hostPath:
      path: /host_manual
      type: Directory
---
I see that the directory /home/jenkins/agent/data does exist when the pod gets created. However, the folder is empty.
kind's documentation is here: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
It should be the case that whatever is on the local machine at hostPath (/Users/xyz/documents/k8_automation/data/manual/) in extraMounts in the cluster yaml is available to the node at containerPath (/host_manual), which then gets mounted in the container at the volume mountPath (/home/jenkins/agent/data).
I should add that even if I change the hostPath in the cluster yaml file to a non-existent folder, the empty "data" folder still gets mounted in the container, so I think it's the connection from my local machine to the kind cluster that's the issue.
Why am I not getting the contents of /Users/xyz/documents/k8_automation/data/manual/, with its many files, also available at /home/jenkins/agent/data in the container?
How can I fix this?
Any alternatives if there is no fix?
Turns out the yaml configuration was just fine.
The reason the directory was not showing up in the container was related to Docker settings. And because "kind is a tool for running local Kubernetes clusters using Docker container “nodes”", those settings matter.
It seems Docker restricts file sharing and by default allows only specific directories to be bind-mounted into containers. Once I added the directory I wanted to show up in the container to the list under Preferences -> Resources -> File sharing, it worked!
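A quick way to verify the whole chain after changing the file-sharing settings (the cluster config filename is assumed, and the node container name is the default kind-control-plane):

# recreate the cluster so the extraMounts are applied
kind delete cluster && kind create cluster --config cluster.yaml

# check the files are visible inside the node container
docker exec kind-control-plane ls /host_manual

# and inside the pod
kubectl exec manual -c tools -- ls /home/jenkins/agent/data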
I am unable to upload a file through a deployment YAML in Kubernetes.
The deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: openjdk:14
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: testing
          mountPath: "/usr/src/myapp/docker.jar"
        workingDir: "/usr/src/myapp"
        command: ["java"]
        args: ["-jar", "docker.jar"]
      volumes:
      - hostPath:
          path: "C:\\Users\\user\\Desktop\\kubernetes\\docker.jar"
          type: File
        name: testing
I get the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned default/test-64fb7fbc75-mhnnj to minikube
Normal Pulled 13s (x3 over 15s) kubelet Container image "openjdk:14" already present on machine
Warning Failed 12s (x3 over 14s) kubelet Error: Error response from daemon: invalid mode: /usr/src/myapp/docker.jar
When I remove the following volumeMount, it runs but fails with the error "unable to access docker.jar":
volumeMounts:
- name: testing
  mountPath: "/usr/src/myapp/docker.jar"
This is a community wiki answer. Feel free to expand it.
That is a known issue with Docker on Windows. Right now it is not possible to correctly mount Windows directories as volumes.
You could try some of the workarounds mentioned by @CodeWizard in this GitHub thread, like here or here.
Also, if you are using VirtualBox, you might want to check this solution:
On Windows, you cannot directly map a Windows directory to your container, because your containers reside inside a VirtualBox VM. So your docker -v command actually maps the directory between the VM and the container.
So you have to do it in two steps:
1. Map a Windows directory to the VM through the VirtualBox manager.
2. Map a directory in your container to the directory in your VM.
You'd better use the Kitematic UI to help you. It is much easier.
Alternatively, you can deploy your setup in a Linux environment to completely avoid this specific kind of issue.
I have a service which runs in Apache. The container status shows as completed and then it restarts. Why is the container not staying in the running state even though the arguments passed have no issues?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ***
spec:
  selector:
    matchLabels:
      app: ***
  replicas: 1
  template:
    metadata:
      labels:
        app: ***
    spec:
      containers:
        - name: ***
          image: ****
          command: ["/bin/sh", "-c"]
          args: ["echo\ sid\ |\ sudo\ -S\ service\ mysql\ start\ &&\ sudo\ service\ apache2\ start"]
          volumeMounts:
            - mountPath: /var/log/apache2/
              name: apache
            - mountPath: /var/log/***/
              name: ***
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: apache
          hostPath:
            path: "/home/sandeep/logs/apache"
        - name: vusmartmaps
          hostPath:
            path: "/home/sandeep/logs/***"
Soon after executing these arguments, the container shows its status as completed and goes into a restart loop. What can we do to keep its status as running?
Please be advised this is not good practice.
If you really want this working that way, your last process must not end.
For example, add sleep 9999 to your container args.
The best option would be splitting those into 2 separate Deployments (see the sketch after this list).
First, it would be easy to scale them independently.
Second, the image would be smaller for each Deployment.
Third, Kubernetes would have full control over those Deployments and you could utilize self-healing and rolling updates.
There is a really good guide and examples on Deploying WordPress and MySQL with Persistent Volumes, which I think would be perfect for you.
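A bare-bones sketch of that split using the official images mentioned further below (names, labels and the root-password handling are placeholders; the persistent volumes and Services from the linked guide are omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD   # assumption: supply this from a Secret in real use
          value: changeme
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:alpine
        ports:
        - containerPort: 80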
But if you prefer to use just one pod, then you would need to split your image or use official Docker images, and your pod might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: test
spec:
  containers:
    - name: mysql
      image: mysql:5.6
    - name: apache
      image: httpd:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: apache
          mountPath: /var/log/apache2/
  volumes:
    - name: apache
      hostPath:
        path: "/home/sandeep/logs/apache"
You would need to expose the pod using a Service:
$ kubectl expose pod app --type=NodePort --port=80
service "app" exposed
Checking what port it has:
$ kubectl describe service app
...
NodePort: <unset> 31418/TCP
...
Also you should read Communicate Between Containers in the Same Pod Using a Shared Volume.
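As a minimal sketch of that shared-volume pattern (container names, images and paths here are illustrative), two containers in one pod can exchange files through an emptyDir:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    # write a file into the shared volume, then stay alive
    command: ["/bin/sh", "-c", "echo hello > /shared/greeting && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: reader
    image: busybox
    # read the same file from the shared volume
    command: ["/bin/sh", "-c", "sleep 5; cat /shared/greeting; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared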
You want to start Apache and MySQL in the same container and keep it running, don't you?
Well, let's break down why it exits first. Kubernetes, just like Docker, will run whatever command you give it inside the container. If that command finishes, the container stops. echo sid | sudo -S service mysql start && sudo service apache2 start asks your init system to start both mysql and apache, but the thing is that Kubernetes is not aware of your init inside the container.
In fact, the command statement runs as the process with PID 1 instead of your init process, overriding whatever default startup command you have in your container image. Whenever the process with PID 1 exits, the container stops.
Therefore, in your case, you would have to start whatever init system you have in your container.
However, this brings us to another problem: Kubernetes already acts as an init system. It starts your pods and supervises them. Therefore, all you need is to start two containers instead: one for mysql and another one for apache.
For example, you could use the official Docker Hub images from https://hub.docker.com/_/httpd and https://hub.docker.com/_/mysql. They already come with the services configured to start up correctly, so you don't even have to specify command and args in your deployment manifest.
Containers are not tiny VMs. You need two in this case, one running MySQL and another running Apache. Both have standard community images available, which I would probably start with.
The following problem occurs on a Kubernetes cluster with 1 master and 3 nodes and also on a single-machine Kubernetes.
I set up Kubernetes with flexvolume SMB support (https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/smb). When I apply a new pod with a flexVolume, the Node mounts the SMB share as expected, but the Pod points its share to some Docker directory on the Node.
My installation:
latest CentOS 7
latest Kubernetes v1.14.0
(https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/)
disabled SELinux and disabled firewall
Docker 1.13.1
jq and cifs-utils
https://raw.githubusercontent.com/Azure/kubernetes-volume-drivers/master/flexvolume/smb/deployment/smb-flexvol-installer/smb installed to /usr/libexec/kubernetes/kubelet-plugins/volume/exec/microsoft.com~smb and executable
Create Pod with
smb-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: smb-secret
type: microsoft.com/smb
data:
  username: YVVzZXI=
  password: YVBhc3N3b3Jk
nginx-flex-smb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-flex-smb
spec:
  containers:
  - name: nginx-flex-smb
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "microsoft.com/smb"
      secretRef:
        name: smb-secret
      options:
        source: "//<host.with.smb.share>/kubetest"
        mountoptions: "vers=3.0,dir_mode=0777,file_mode=0777"
What happens
Mount point on Node is created on /var/lib/kubelet/pods/bef26895-5ac7-11e9-a668-00155db9c92e/volumes/microsoft.com~smb.
mount returns //<host.with.smb.share>/kubetest on /var/lib/kubelet/pods/bef26895-5ac7-11e9-a668-00155db9c92e/volumes/microsoft.com~smb/test type cifs (rw,relatime,vers=3.0,cache=strict,username=aUser,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=172.27.72.43,file_mode=0777,dir_mode=0777,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
read and write works as expected on host and on the Node itself
On the Pod:
mount for /data points to tmpfs on /data type tmpfs (rw,nosuid,nodev,seclabel,size=898680k,nr_inodes=224670,mode=755)
but the content of the directory /data comes from /run/docker/libcontainerd/8039742ae2a573292cd9f4ef7709bf7583efd0a262b9dc434deaf5e1e20b4002/ on the node.
I tried to install the Pod with a PersistentVolumeClaim and got the same problem. Searching for this problem got me no solutions.
Our other pods use GlusterFS and heketi, which works fine.
Is there maybe a configuration failure? Something missing?
EDIT: Solution
I upgraded Docker to the latest validated version, 18.06, and everything works well now.
To install it follow the instructions on Get Docker CE for CentOS.
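For reference, on CentOS 7 that boils down to something like the following (the exact package version string is an assumption; check the repo for the current 18.06 build):

# remove the distro-packaged Docker 1.13.1
sudo yum remove docker docker-common docker-selinux docker-engine

# add the official Docker CE repository and install the pinned version
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce-18.06.3.ce

sudo systemctl enable --now docker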