ZFS mount dataset for zone - solaris

I shut down my non-global zone and unmounted its ZFS zonepath.
The command I used to unmount:
zfs unmount -f zones-pool/one-zone
details:
zfs list | grep one
zones-pool/one-zone                15,2G  9,82G    32K  /zones-fs/one-zone
zones-pool/one/rpool/ROOT/solaris  15,2G  9,82G  7,83G  /zones-fs/one/root
In the output above you can see that space is occupied: 15.2G used, with 9.82G available.
more details:
# zfs get mountpoint zones-pool/one-zone
NAME                 PROPERTY    VALUE               SOURCE
zones-pool/one-zone  mountpoint  /zones-fs/one-zone  local
# zfs get mounted zones-pool/one-zone
NAME                 PROPERTY  VALUE  SOURCE
zones-pool/one-zone  mounted   no     -
But if I mount the ZFS dataset,
I cannot see its contents.
step 1 mount:
zfs mount zones-pool/one-zone
step 2 see mount with df -h:
df -h | grep one
zones-pool/one-zone/rpool/ROOT/solaris   25G   32K  9,8G   1%  /zones-fs/one-zone/root
zones-pool/one-zone                      25G   32K  9,8G   1%  /zones-fs/one-zone
step 3 list content:
ls -l /zones-fs/one-zone/root
total 0
Why is it empty?
Also, in step 2 you can see that df -h reports only 1% used,
which I do not understand.

To view the contents of a zoned dataset you need to either boot the zone or mount the dataset directly.
The zone's files (its root filesystem) live in the dataset
zones-pool/one-zone/rpool/ROOT/solaris
To mount it, set its "zoned" property to off and set its "mountpoint" property to the path where you want it mounted.
This may be done via
zfs set zoned=off zones-pool/one-zone/rpool/ROOT/solaris
zfs set mountpoint=/zones-pool/one-zone-root-fs zones-pool/one-zone/rpool/ROOT/solaris
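After that, a minimal follow-up sketch (the mountpoint path is the one chosen above; the revert commands are an assumption about your original settings, so check them first):
zfs mount zones-pool/one-zone/rpool/ROOT/solaris      # mount the zone root dataset
ls -l /zones-pool/one-zone-root-fs                    # its contents are now visible
# when finished, unmount and restore the zoned flag (and, if needed, the original mountpoint) before booting the zone again
zfs unmount zones-pool/one-zone/rpool/ROOT/solaris
zfs set zoned=on zones-pool/one-zone/rpool/ROOT/solaris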
Space in the dataset may also be occupied by snapshots and clones; you can check for them with these commands:
zfs list -t snap zones-pool
zfs get -H -r -o value,name origin zones-pool | grep -v '^-'
The first command lists all snapshots; the second lists datasets that depend on a snapshot (i.e. whose origin property is not "-").
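If you only want a per-dataset breakdown of where the space goes (data vs. snapshots vs. children), this short sketch may also help, assuming your zfs supports the space columns:
zfs list -o space -r zones-pool/one-zone    # shows USEDSNAP, USEDDS and USEDCHILD per dataset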

Related

How can I merge 2 mounted volumes on Debian 10?

I've got 2 volumes already mounted on a VPS from Hetzner.
The first one is mounted on / with 1 TB of space. The other one is on /home with 2 TB.
I want to use the 3 TB together; how can I merge these 2 volumes without erasing any data?
Here is a picture of what df -h looks like:
Thank you very much!
You can merge them using a union filesystem. https://en.wikipedia.org/wiki/UnionFS
In Linux, you can find several kernel modules implementing a union filesystem, such as aufs and overlayfs. The available modules may vary depending on the kernel configuration of your VPS. I will give you a configuration example using the aufs kernel module.
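Before starting, you may want to check whether the kernel on your VPS actually provides one of these modules; a small hedged check (package and module names vary by distribution):
grep -Ei 'aufs|overlay' /proc/filesystems    # filesystems the running kernel already knows about
modprobe aufs                                # try to load aufs if it is built as a module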
First, create a subdirectory on each mounted volume:
mkdir -p /for-aufs
mkdir -p /home/for-aufs
Next, create a directory for the aufs mount point:
mkdir -p /aufs
Then add this line to your /etc/fstab file:
none /aufs aufs br:/for-aufs=rw:/home/for-aufs=rw,sum,create=rr 0 0
Or using mount directly:
mount -t aufs -o br:/for-aufs=rw:/home/for-aufs=rw,sum,create=rr none /aufs
Then, when executing df -h, you will see a new mount point at /aufs backed by /for-aufs (/dev/md2) and /home/for-aufs (/dev/md3). When you use the /aufs mount point, a round-robin policy (create=rr) will distribute newly created files between /for-aufs and /home/for-aufs.
In conclusion, /for-aufs + /home/for-aufs = /aufs
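A quick way to confirm the union is active and to watch the combined usage afterwards:
df -h /aufs /for-aufs /home/for-aufs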
Remember to read the manual for more information:
http://manpages.ubuntu.com/manpages/focal/man5/aufs.5.html

Can we see transfer progress with kubectl cp?

Is it possible to know the progress of file transfer with kubectl cp for Google Cloud?
No, this doesn't appear to be possible.
kubectl cp appears to be implemented by doing the equivalent of
kubectl exec podname -c containername -- \
    tar cf - /whatever/path \
  | tar xf -
This means two things:
tar(1) doesn't print any useful progress information. (You could in principle add a v flag to print each file name to stderr as it goes by; a sketch follows this list. Even then it won't tell you how many files there are in total or how large they are.) So kubectl cp as implemented has no way to get this out.
There is no richer native Kubernetes API for copying files.
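Here is a hedged sketch of that 'v' idea; it only prints file names, not totals or percentages:
kubectl exec podname -c containername -- tar cvf - /whatever/path | tar xf -
# tar's verbose listing goes to stderr, so it does not corrupt the archive stream on stdout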
If moving files in and out of containers is a key use case for you, it will probably be easier to build, test, and operate a simple HTTP service for it. You can then rely on things like the HTTP Content-Length: header for progress metering.
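As a rough, hedged sketch of that approach (it assumes python3 is available in the container; the pod name, port numbers, and paths are illustrative):
kubectl exec podname -c containername -- python3 -m http.server 8000 --directory /whatever/path &
kubectl port-forward podname 8080:8000 &
curl -o file.bin http://localhost:8080/file.bin    # curl's progress meter relies on the Content-Length header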
One option is to use pv, which will show time elapsed, data transferred and throughput (e.g. MB/s):
$ kubectl exec podname -c containername -- tar cf - /whatever/path | pv | tar xf -
14.1MB 0:00:10 [1.55MB/s] [ <=> ]
If you know the expected transfer size ahead of time you can also pass this to pv, which will then calculate a % progress and an ETA, e.g. for a 100 MB transfer:
$ kubectl exec podname -c containername -- tar cf - /whatever/path | pv -s 100m | tar xf -
13.4MB 0:00:09 [1.91MB/s] [==> ] 13% ETA 0:00:58
You obviously need to have pv installed (locally) for any of the above to work.
It's not possible with kubectl cp itself, but the link below shows how to implement rsync with Kubernetes; rsync shows you the progress of the file transfer.
rsync files to a kubernetes pod
I figured out a hacky way to do this. If you have shell access to the container you're copying to, you can run something like wc -c <file> on the remote side and compare that to the size locally. du -h <file> is another option; it gives human-readable output, so it may be more convenient.
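A rough sketch of that comparison (the pod, container, and file names are placeholders):
remote_bytes=$(kubectl exec podname -c containername -- sh -c 'wc -c < /path/in/pod/file')
local_bytes=$(wc -c < ./file)
echo "local ${local_bytes} of remote ${remote_bytes} bytes copied so far"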
On macOS, there is also the hacky option of opening Activity Monitor on the "Network" tab. If you are copying with kubectl cp from your local machine to a remote pod, the total transfer is shown in the "Sent Bytes" column.
It is not very precise, but it sort of does the job without installing anything new.
I know it doesn't show live progress for each file, but it does output a status line, including a byte count, for each completed file, which for multiple files copied via scripts is almost as good as live progress:
kubectl cp local.file container:/path/on/container --v=4
Note that --v=4 sets the verbosity and gives you this output. I found that kubectl cp shows it from v=3 through v=5.

Is "in the cloud" gsutil cp an atomic operation?

Assuming I have copied one object into a Google Cloud Storage bucket using the following command:
gsutil -h "Cache-Control:public,max-age=3600" cp -a public-read a.html gs://some-bucket/
I now want to copy this file "in the cloud" while keeping the public ACL and simultaneously updating the Cache-Control header:
gsutil -h "Cache-Control:no-store" cp -p gs://some-bucket/a.html gs://some-bucket/b.html
Is this operation atomic? I.e. can I be sure that the object gs://some-bucket/b.html will be available from the very beginning with the modified Cache-Control:no-store header?
The reason for my question: I'm using a Google Cloud Storage bucket as a CDN backend. While I want most of the objects in the bucket to be cached by the CDN according to the max-age provided in their Cache-Control header, I want to make sure that a few specific files, which are in fact copies of cacheable versions, are never cached by the CDN. It is therefore crucial that these objects, when being copied, never appear with a Cache-Control:public,max-age=XXX header but immediately appear with Cache-Control:no-store, so as to eliminate the chance that a request coming from the CDN reads the copied object at a point in time when a max-age is still present and hence caches an object which is never supposed to be cached.
Yes, copying to the new object with Cache-Control set will be atomic. You can verify this by looking at the metageneration property of the object.
For example, upload an object:
$ BUCKET=mybucket
$ echo foo | ./gsutil cp - gs://$BUCKET/foo.txt
Copying from <STDIN>...
/ [1 files][ 0.0 B/ 0.0 B]
Operation completed over 1 objects.
and you'll see that its initial metageneration is 1:
$ ./gsutil ls -L gs://$BUCKET/foo.txt | grep Meta
Metageneration: 1
Whenever an object's metadata is modified, the metageneration is changed. For example, if the cache control is updated later like so:
$ ./gsutil setmeta -h "Cache-Control:no-store" gs://$BUCKET/foo.txt
Setting metadata on gs://mybucket/foo.txt...
/ [1 objects]
Operation completed over 1 objects.
The new metageneration is 2:
$ ./gsutil ls -L gs://$BUCKET/foo.txt | grep Meta
Metageneration: 2
Now, if we run the copy command:
$ ./gsutil -h "Cache-Control:no-store" cp -p gs://$BUCKET/foo.txt gs://$BUCKET/bar.txt
Copying gs://mybucket/foo.txt [Content-Type=application/octet-stream]...
- [1 files][ 4.0 B/ 4.0 B]
Operation completed over 1 objects/4.0 B.
The metageneration of the new object is 1:
$ ./gsutil ls -L gs://$BUCKET/bar.txt | grep Meta
Metageneration: 1
This means that the new object was written exactly once, with the new Cache-Control already applied, and has not been modified since; there was never a version of b.html carrying the old header.
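If you also want to double-check the header on the copy, a quick hedged check (the exact label formatting of gsutil ls -L output may vary):
$ ./gsutil ls -L gs://$BUCKET/bar.txt | grep Cache-Control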

Centos7 "mount -a" "mount point /mnt/dev/ does not exist"

Adding the following line to /etc/fstab and rebooting seems to work as expected, i.e. all of the files in the shared directory "DEV" are available and read-only.
/etc/fstab
//192.168.99.100/DEV /mnt/dev/ cifs _netdev,username=username,password=password,ro,uid=500,gid=1001 0 0
However, I am trying to mount this during the machine's provisioning and avoid rebooting, so I've tried running "mount -a" but get the following error:
[root@localhost ~]# mount -a
mount: mount point /mnt/dev/ does not exist
How can I make this mount available without rebooting?
OK, I guess that just adding an /etc/fstab entry causes the mount directory to be created automatically at some point during the next reboot. So, in order to avoid rebooting, I apparently need to create the directory manually first:
mkdir /mnt/dev
mount -a
(rejoice)
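For completeness: since the fstab entry already exists, you can also mount just that one entry instead of everything, a minor variation on the above:
mkdir -p /mnt/dev
mount /mnt/dev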

How to analyze disk usage of a Docker container

I can see that Docker takes 12GB of my filesystem:
2.7G /var/lib/docker/vfs/dir
2.7G /var/lib/docker/vfs
2.8G /var/lib/docker/devicemapper/mnt
6.3G /var/lib/docker/devicemapper/devicemapper
9.1G /var/lib/docker/devicemapper
12G /var/lib/docker
But, how do I know how this is distributed over the containers?
I tried to attach to the containers by running (the new v1.3 command)
docker exec -it <container_name> bash
and then running 'df -h' to analyze the disk usage. It seems to be working, but not with containers that use 'volumes-from'.
For example, I use a data-only container for MongoDB, called 'mongo-data'.
When I run docker run -it --volumes-from mongo-data busybox, and then df -h inside the container, it says that the filesystem mounted on /data/db (my 'mongo-data' data-only container) uses 11.3G, but when I do du -h /data/db, it says that it uses only 2.1G.
So, how do I analyze a container/volume disk usage? Or, in my case, how do I find out the 'mongo-data' container size?
To see the file size of your containers, you can use the --size argument of docker ps:
docker ps --size
Since 1.13.0, Docker includes a new command, docker system df, to show Docker disk usage.
$ docker system df
TYPE            TOTAL    ACTIVE   SIZE       RECLAIMABLE
Images          5        1        2.777 GB   2.647 GB (95%)
Containers      1        1        0 B        0 B
Local Volumes   4        1        3.207 GB   2.261 GB (70%)
To show more detailed information on space usage:
$ docker system df --verbose
Posting this as an answer because my comments above got hidden:
List the size of a container:
du -d 2 -h /var/lib/docker/devicemapper | grep `docker inspect -f "{{.Id}}" <container_name>`
List the sizes of a container's volumes:
docker inspect -f "{{.Volumes}}" <container_name> | sed 's/map\[//' | sed 's/]//' | tr ' ' '\n' | sed 's/.*://' | xargs sudo du -d 1 -h
Edit:
List all running containers' sizes and volumes:
for d in `docker ps -q`; do
  d_name=`docker inspect -f {{.Name}} $d`
  echo "========================================================="
  echo "$d_name ($d) container size:"
  sudo du -d 2 -h /var/lib/docker/devicemapper | grep `docker inspect -f "{{.Id}}" $d`
  echo "$d_name ($d) volumes:"
  docker inspect -f "{{.Volumes}}" $d | sed 's/map\[//' | sed 's/]//' | tr ' ' '\n' | sed 's/.*://' | xargs sudo du -d 1 -h
done
NOTE: Change 'devicemapper' according to your Docker storage driver (e.g. 'aufs').
The volume part did not work anymore, so if anyone is interested, I changed the above script a little bit:
for d in `docker ps | awk '{print $1}' | tail -n +2`; do
  d_name=`docker inspect -f {{.Name}} $d`
  echo "========================================================="
  echo "$d_name ($d) container size:"
  sudo du -d 2 -h /var/lib/docker/aufs | grep `docker inspect -f "{{.Id}}" $d`
  echo "$d_name ($d) volumes:"
  for mount in `docker inspect -f "{{range .Mounts}} {{.Source}}:{{.Destination}}
{{end}}" $d`; do
    size=`echo $mount | cut -d':' -f1 | sudo xargs du -d 0 -h`
    mnt=`echo $mount | cut -d':' -f2`
    echo "$size mounted on $mnt"
  done
done
I use docker stats $(docker ps --format={{.Names}}) --no-stream to get:
CPU usage,
Mem usage / total mem allocated to the container (the limit can be set with the docker run command)
Mem %
Block I/O
Net I/O
Improving Maxime's answer:
docker ps --size
You'll see something like this:
+---------------+---------------+--------------------+
| CONTAINER ID | IMAGE | SIZE |
+===============+===============+====================+
| 6ca0cef8db8d | nginx | 2B (virtual 183MB) |
| 3ab1a4d8dc5a | nginx | 5B (virtual 183MB) |
+---------------+---------------+--------------------+
When starting a container, the image that the container is started from is mounted read-only (virtual).
On top of that, a writable layer is mounted, in which any changes made to the container are written.
So the Virtual size (183MB in the example) is used only once, regardless of how many containers are started from the same image - I can start 1 container or a thousand; no extra disk space is used.
The "Size" (2B in the example) is unique per container though, so the total space used on disk is:
183MB + 5B + 2B
Be aware that the size shown does not include all disk space used for a container.
Things that are not currently included are:
- volumes
- swapping
- checkpoints
- disk space used for log files generated by the container (see the sketch after the link below)
https://github.com/docker/docker.github.io/issues/1520#issuecomment-305179362
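For the log-file item above, a hedged one-liner to check the size of a container's json-file log (<container_name> is a placeholder; sudo may be needed to read under /var/lib/docker):
sudo du -h "$(docker inspect -f '{{.LogPath}}' <container_name>)"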
(this answer is not useful, but leaving it here since some of the comments may be)
docker images will show the 'virtual size', i.e. the total including all the lower layers. So there is some double-counting if you have containers that share the same base image.
documentation
You can use
docker history IMAGE_ID
to see how the image size is distributed between its various sub-components.
Keep in mind that docker ps --size may be an expensive command, taking more than a few minutes to complete. The same applies to container list API requests with size=1. It's better not to run it too often.
Take a look at alternatives we compiled, including the du -hs option for the docker persistent volume directory.
Alternative to docker ps --size
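For instance, a hedged example of the du-based alternative mentioned above, assuming the default Docker data root:
sudo du -hs /var/lib/docker/volumes/*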
As "docker ps --size" produces heavy IO load on host, it is not feasable running such command every minute in a production environment. Therefore we have to do a workaround in order to get desired container size or to be more precise, the size of the RW-Layer with a low impact to systems perfomance.
This approach gathers the "device name" of every container and then checks size of it using "df" command. Those "device names" are thin provisioned volumes that a mounted to / on each container. One problem still persists as this observed size also implies all the readonly-layers of underlying image. In order to address this we can simple check size of used container image and substract it from size of a device/thin_volume.
One should note that every image layer is realized as a kind of a lvm snapshot when using device mapper. Unfortunately I wasn't able to get my rhel system to print out those snapshots/layers. Otherwise we could simply collect sizes of "latest" snapshots. Would be great if someone could make things clear. However...
After some tests, it seems that creating a container always adds an overhead of approx. 40 MiB (tested with containers based on the image "httpd:2.4.46-alpine"):
docker run -d --name apache httpd:2.4.46-alpine   # then get the device name from docker inspect and look it up with df
df -T reports 90 MB, whereas "Virtual Size" from "docker ps --size" states 50 MB and a very small payload of 2 bytes: a mysterious overhead of 40 MB
curl/download a 100 MB file within the container
df -T now reports 190 MB, whereas "Virtual Size" from "docker ps --size" states 150 MB and a payload of 100 MB: again an overhead of 40 MB
The following shell snippet prints results (in bytes) that match the results from "docker ps --size" (but keep in mind the overhead of 40 MB mentioned above):
for c in $(docker ps -q); do
  container_name=$(docker inspect -f "{{.Name}}" ${c} | sed 's/^\///g')
  device_n=$(docker inspect -f "{{.GraphDriver.Data.DeviceName}}" ${c} | sed 's/.*-//g')
  device_size_kib=$(df -T | grep ${device_n} | awk '{print $4}')
  device_size_byte=$((1024 * ${device_size_kib}))
  image_sha=$(docker inspect -f "{{.Image}}" ${c} | sed 's/.*://g')
  image_size_byte=$(docker image inspect -f "{{.Size}}" ${image_sha})
  container_size_byte=$((${device_size_byte} - ${image_size_byte}))

  echo my_node_dm_device_size_bytes\{cname=\"${container_name}\"\} ${device_size_byte}
  echo my_node_dm_container_size_bytes\{cname=\"${container_name}\"\} ${container_size_byte}
  echo my_node_dm_image_size_bytes\{cname=\"${container_name}\"\} ${image_size_byte}
done
Further reading about device mapper: https://test-dockerrr.readthedocs.io/en/latest/userguide/storagedriver/device-mapper-driver/
The docker system df command displays information regarding the amount of disk space used by the docker daemon.
docker system df -v