In my docker-compose file I want to create a named volume that targets a local drive for test purposes; in production we will use NFS.
I created the compose file as follows:
version: '3.3'
services:
  test:
    build: .
    volumes:
      - type: volume
        source: data_volume
        target: /data
    networks:
      - network
volumes:
  data_volume:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: c:/data
networks:
  network:
    driver: overlay
    attachable: true
When I run docker-compose up, I get the following error:
for test_test_1 Cannot create container for service test: failed to mount local volume:
mount c:/data:/var/lib/docker/volumes/test_data_volume/_data, flags: 0x1000: no such file
or directory
Even with the error, the named volume still gets created. When I inspect it:
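For reference, the inspect command (inferred from the Compose-generated volume name shown in the output below):
docker volume inspect test_data_volume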
{
    "CreatedAt": "2019-10-07T09:10:14Z",
    "Driver": "local",
    "Labels": {
        "com.docker.compose.project": "test",
        "com.docker.compose.version": "1.24.1",
        "com.docker.compose.volume": "data_volume"
    },
    "Mountpoint": "/var/lib/docker/volumes/test_data_volume/_data",
    "Name": "test_data_volume",
    "Options": {
        "device": "c:/data",
        "o": "bind",
        "type": "none"
    },
    "Scope": "local"
}
I'm still not sure why the Mountpoint is targeting that location.
I know I can achieve this without a named volume (which I already did), but later in the project we will definitely need named volumes.
Any suggestions on how to achieve this?
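For context, the production setup we have in mind is the same named volume backed by NFS, roughly like the sketch below (the server address and export path are placeholders, not our real values):
volumes:
  data_volume:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw,nfsvers=4"
      device: ":/export/data"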
Same here. Using Docker Desktop for Windows, I tried to mount the local path E:\Project\MyWebsite\code as a named volume and failed. Here's how I sorted it out.
First, I changed the path to ".":
volumes:
  website:
    driver: local
    driver_opts:
      type: none
      device: "."
      o: bind
This time docker-compose up ran successfully, so I opened a shell inside the container and looked at what was actually mounted:
bash-5.0# ls -l
total 62
lrwxrwxrwx 1 root root 11 Oct 1 15:15 E -> /host_mnt/e
drwxr-xr-x 2 root root 14336 Sep 11 15:27 bin
drwxr-xr-x 4 root root 2048 Apr 19 2017 dev
lrwxrwxrwx 1 root root 11 Oct 1 15:15 e -> /host_mnt/e
drwxr-xr-x 1 root root 180 Sep 30 11:53 etc
drwxr-xr-x 2 root root 2048 Sep 11 15:27 home
drw-r--r-- 4 root root 80 Oct 8 22:52 host_mnt
drwxr-xr-x 1 root root 60 Sep 30 11:53 lib
drwxr-xr-x 5 root root 2048 Sep 11 15:27 media
...
drwxrwxrwt 1 root root 40 Oct 11 19:37 tmp
drwxr-xr-x 1 root root 80 Sep 11 15:27 usr
drwxr-xr-x 13 root root 2048 Sep 11 15:27 var
Obviously not a Windows volume; probably the Linux VM that Docker creates behind the scenes. But the paths /host_mnt/e and /host_mnt/E looked promising, so I tried changing the docker-compose definition to:
volumes:
  website:
    driver: local
    driver_opts:
      type: none
      device: "/host_mnt/e/Project/MyWebsite/code"
      o: bind
And it worked! It looks like named volumes don't resolve host paths the same way ordinary bind mounts do on Windows.
Note that /host_mnt/e probably won't exist unless you've already shared the drive with Docker. That shouldn't be an issue for you, since the ordinary way of mounting the local drive already worked.
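Applied to the compose file in the question, the volume definition would presumably become something like this (an untested sketch; it assumes the C: drive has been shared with Docker Desktop):
volumes:
  data_volume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/host_mnt/c/data"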
I am trying to configure Fluentd to send logs to Elasticsearch. After configuring it, I could not see any pod logs in Elasticsearch.
While debugging, I noticed that there are no logs on the node under /var/log/pods:
cd /var/log/pods
ls -la
drwxr-xr-x. 34 root root 8192 Dec 9 12:26 .
drwxr-xr-x. 14 root root 4096 Dec 9 02:21 ..
drwxr-xr-x. 3 root root 21 Dec 7 03:14 pod1
drwxr-xr-x. 6 root root 119 Dec 7 11:17 pod2
cd pod1/containerName
ls -la
total 0
drwxr-xr-x. 2 root root 6 Dec 7 03:14 .
drwxr-xr-x. 3 root root 21 Dec 7 03:14 ..
But I can see the logs when executing kubectl logs pod1.
According to the documentation, logs should be in this path. Do you have any idea why no logs are stored on the node?
I have found what was happening. The problem was related to the log driver: it was configured to send the logs to journald:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' ID
journald
I have changed it to json-file, and now the logs are written to /var/log/pods.
Here are the different logging driver configuration options.
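In case it helps anyone: the default log driver is set daemon-wide. A minimal sketch, assuming Docker is configured through /etc/docker/daemon.json (the max-size/max-file options are optional rotation settings):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
After editing, restart Docker (e.g. systemctl restart docker); only containers created after the change pick up the new driver, so the pods need to be recreated.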
I am trying to set up a shared volume in a minikube Kubernetes cluster to allow multiple pods to communicate with each other. What is configured is:
A PVC using the nfs-server-provisioner dynamic provisioner
Multiple Pods (some are jobs) that mount the PVC
The goal is to have an init container in each pod that creates a directory on startup using the Pod's name as the directory name, and have a job scan that directory and do some stuff.
I have this configured, and no errors are thrown, but the directory isn't created.
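For reference, the kind of init container I mean looks roughly like the sketch below (names, images and the claim name are illustrative, not my actual manifests; the pod name comes from the downward API):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  volumes:
    - name: output
      persistentVolumeClaim:
        claimName: shared-output   # placeholder claim name
  initContainers:
    - name: make-pod-dir
      image: busybox
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      # create a per-pod directory on the shared volume at startup
      command: ["sh", "-c", "mkdir -p /output/$POD_NAME"]
      volumeMounts:
        - name: output
          mountPath: /output
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: output
          mountPath: /output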
When trying to do this manually I see some strange behavior: mkdir exits with a success (zero) status but doesn't actually do anything:
< ssh into pod >
user@802542b3ccb195b001258094dc543606-1601299620-zcszs:~$ ls -al /output/
total 8
drwxrwxrwx 2 user users 4096 Sep 28 13:28 .
drwxr-xr-x 1 root root 4096 Sep 28 13:27 ..
user@802542b3ccb195b001258094dc543606-1601299620-zcszs:~$ mkdir /output/test
user@802542b3ccb195b001258094dc543606-1601299620-zcszs:~$ echo $?
0
user@802542b3ccb195b001258094dc543606-1601299620-zcszs:~$ ls -al /output/
total 8
drwxrwxrwx 2 user users 4096 Sep 28 13:28 .
drwxr-xr-x 1 root root 4096 Sep 28 13:27 ..
user@802542b3ccb195b001258094dc543606-1601299620-zcszs:~$
I am able to touch files:
user@802542b3ccb195b001258094dc543606-1601299740-bw2hj:~$ ls -al /output/
total 8
drwxrwxrwx 2 user users 4096 Sep 28 13:29 .
drwxr-xr-x 1 root root 4096 Sep 28 13:29 ..
user@802542b3ccb195b001258094dc543606-1601299740-bw2hj:~$ touch /output/test
user@802542b3ccb195b001258094dc543606-1601299740-bw2hj:~$ ls -al /output/
total 8
drwxrwxrwx 2 user users 4096 Sep 28 13:29 .
drwxr-xr-x 1 root root 4096 Sep 28 13:29 ..
-rw-r--r-- 1 user users 0 Sep 28 13:29 test
user@802542b3ccb195b001258094dc543606-1601299740-bw2hj:~$
Here is the nfs mount:
Filesystem Size Used Avail Use% Mounted on
10.110.46.205:/export/pvc-2e433dc6-018d-11eb-be1a-0242766f1f7c 252G 134G 107G 56% /output
The same behavior is observed when using regular volumes. I am using the Docker driver, but I also observed this with the VirtualBox driver. Is this a minikube issue? I would expect mkdir to error out if it can't complete.
minikube version: v1.13.1
commit: 1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty
I'm running a WSL2 Ubuntu terminal with Docker for Windows, and every time I run docker-compose up the permissions of the folder that contains the project get changed.
Before running docker-compose:
drwxr-xr-x 12 cesarvarela cesarvarela 4096 Jun 24 15:37 .
After:
drwxr-xr-x 12 999 cesarvarela 4096 Jun 24 15:37
This prevents me from changing git branches, editing files, etc. I have to chown the folder back to my user to do that, but I would like to not have to do this every time.
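The workaround I use for now is simply taking ownership back after every run, something like:
# re-take ownership of the project folder after docker-compose has changed it
sudo chown -R "$USER":"$USER" .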
I've been trying to install a custom pack on a single-node Kubernetes cluster using these links:
https://github.com/StackStorm/st2packs-dockerfiles/
https://github.com/stackstorm/stackstorm-ha
StackStorm installs successfully with the default dashboard, but when I try to build a custom pack and run helm upgrade, it doesn't work.
Here are my StackStorm pack directory and the image Dockerfile:
/home/manisha.tanwar/st2packs-dockerfiles # ll st2packs-image/packs/st2_chef/
total 28
drwxr-xr-x. 4 manisha.tanwar domain users 4096 Apr 28 16:11 actions
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 aliases
-rwxr-xr-x. 1 manisha.tanwar domain users 211 Apr 28 16:11 pack.yaml
-rwxr-xr-x. 1 manisha.tanwar domain users 65 Apr 28 16:11 README.md
-rwxr-xr-x. 1 manisha.tanwar domain users 293 Apr 28 17:47 requirements.txt
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 rules
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 sensors
/home/manisha.tanwar/st2packs-dockerfiles # cat st2packs-image/Dockerfile
ARG PACKS="file:///tmp/stackstorm-st2"
FROM stackstorm/st2packs:builder AS builder
COPY packs/st2_chef /tmp/stackstorm-st2/
RUN ls -la /tmp/stackstorm-st2
RUN git config --global http.sslVerify false
# Install custom packs
RUN /opt/stackstorm/st2/bin/st2-pack-install ${PACKS}
###########################
# Minimize the image size. Start with alpine:3.8,
# and add only packs and virtualenvs from builder.
FROM stackstorm/st2packs:runtime
The image is created using the command:
docker build -t st2_chef:v0.0.2 st2packs-image
And then I changed values.yaml as below:
packs:
  configs:
    packs.yaml: |
      ---
      # chef pack
  image:
    name: st2_chef
    tag: 0.0.1
    pullPolicy: Always
And then I ran
helm upgrade <release-name>.
but the pack doesn't show up on the dashboard or in the CLI.
Please help; we are planning to move from standalone StackStorm to StackStorm HA, and I need to get a POC done for that.
Thanks in advance!!
Got it working with the help of the community. Here's the link if anyone wants to follow it:
https://github.com/StackStorm/stackstorm-ha/issues/128
I wasn't pushing the image to a Docker registry and referencing it from the Helm configuration.
I updated values.yaml as follows:
packs:
  # Custom StackStorm pack configs. Each record creates a file in '/opt/stackstorm/configs/'
  # https://docs.stackstorm.com/reference/pack_configs.html#configuration-file
  configs:
    core.yaml: |
      ---
  image:
    # Uncomment the following block to make the custom packs image available to the necessary pods
    #repository: your-remote-docker-registry.io
    repository: manishatanwar
    name: st2_nagios
    tag: "0.0.1"
    pullPolicy: Always
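In other words, the image referenced in values.yaml has to be pushed to a registry the cluster can pull from. A rough sketch of the flow (release name and chart reference are placeholders; as far as I understand, the chart combines repository, name and tag into manishatanwar/st2_nagios:0.0.1):
# build and push the custom packs image to a reachable registry
docker build -t manishatanwar/st2_nagios:0.0.1 st2packs-image
docker push manishatanwar/st2_nagios:0.0.1
# roll out the updated values
helm upgrade <release-name> <chart> -f values.yaml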
The Postgres image I am currently deploying with OpenShift is generally working great. However, I need to persistently store the database data (of course), and to do so I created a persistent volume claim and mounted it to the Postgres data directory like so:
- mountPath: /var/lib/pgsql/data/userdata
  name: db-storage-volume
and
- name: db-storage-volume
  persistentVolumeClaim:
    claimName: db-storage
The problem I am facing now is that the initdb script wants to change the permissions of that data folder, but it can't, and the directory is assigned to a very odd user/group, as the output of ls -la /var/lib/pgsql/data shows (including the output of the failing command):
total 12
drwxrwxr-x. 3 postgres root 21 Aug 30 13:06 .
drwxrwx---. 3 postgres root 17 Apr 5 09:55 ..
drwxrwxrwx. 2 nobody nobody 12288 Jun 26 11:11 userdata
chmod: changing permissions of '/var/lib/pgsql/data/userdata': Permission denied
How can I handle this? The permissions are enough to read and write, but initdb (and the base image's initialization functions) really wants to change the permissions of that folder.
Just after I had sent my question I had an idea, and it turned out to work:
Change the mount to the parent folder /var/lib/pgsql/data/
Modify my entry script to run mkdir /var/lib/pgsql/data/userdata the first time it runs, i.e. when the folder does not exist yet (see the sketch below)
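With the volume now mounted at /var/lib/pgsql/data instead of .../userdata, the entry-script addition is roughly this (an illustrative sketch, not the exact script):
# create the data subdirectory on first run so it is owned by the container user
if [ ! -d /var/lib/pgsql/data/userdata ]; then
    mkdir /var/lib/pgsql/data/userdata
fi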
Now it is:
total 16
drwxrwxrwx. 3 nobody nobody 12288 Aug 30 13:19 .
drwxrwx---. 3 postgres root 17 Apr 5 09:55 ..
drwxr-xr-x. 2 1001320000 nobody 4096 Aug 30 13:19 userdata
Which works. Notice that the folder itself is still owned by nobody:nobody and is 777, but the created userdata folder is owned by the correct user.