How does the copy artifacts job work in Kubernetes

I am trying to run a Hyperledger Fabric blockchain network on Kubernetes using https://github.com/IBM/blockchain-network-on-kubernetes as the reference. In one of the steps, the artifacts (chaincode, configtx.yaml) are copied into the volume using the YAML file below:
https://github.com/IBM/blockchain-network-on-kubernetes/blob/master/configFiles/copyArtifactsJob.yaml
I am unable to understand how the files are copied into the shared persistent volume. Does the entrypoint command on line 24 copy the artifacts to the persistent volume? I do not see a cp here, so how does the copy happen?
command: ["sh", "-c", "ls -l /shared; rm -rf /shared/*; ls -l /shared; while [ ! -d /shared/artifacts ]; do echo Waiting for artifacts to be copied; sleep 2; done; sleep 10; ls -l /shared/artifacts; "]

Actually, this job does not copy anything. It is only used to wait until the copy completes.
Look at the setup_blockchainNetwork.sh script; the actual copy happens at line 82:
kubectl cp ./artifacts $pod:/shared/
This line copies the contents of ./artifacts into the /shared directory of the shared-pvc volume.
The job just makes sure the copy is complete before further tasks are processed. When the copy is done, the job finds the files in the /shared/artifacts directory and runs to completion. When the job is completed, the script proceeds to the next task. Look at the condition here.
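Putting the two pieces together, the overall flow looks roughly like this (a sketch only; the job name copyartifacts and its label are assumptions based on the linked repo):

pod=$(kubectl get pods --selector=job-name=copyartifacts -o jsonpath='{.items[0].metadata.name}')
kubectl cp ./artifacts "$pod":/shared/
# the job's wait loop now finds /shared/artifacts and exits,
# so block until Kubernetes marks the job as succeeded
while [ "$(kubectl get job copyartifacts -o jsonpath='{.status.succeeded}')" != "1" ]; do
  sleep 2
done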

Related

Run a pod with tar and try to push file into the mount point

Our basic need is to check whether or not we are able to copy/push a file to a mount point. For this, I was advised to run a pod with tar and try to push a file into the mount point. I searched the web and found the following commands:
-> kubectl cp [file-path] [pod-name]:/[path] (Although this command gives no error, it is not working and the file is not visible at the mentioned location.)
-> Verified the absence of the file in the remote pod using the following command:
kubectl exec <pod_name> -- ls -la /
-> Found the below command that uses tar options, but I don't want to exclude any files and hence am not sure how to proceed with this:
kubectl exec -n <some-namespace> <some-pod> -- tar cf - --exclude='pattern' /tmp/foo | tar xf - -C /tmp/bar
-> Is there any other tar option that can help me push the file to the mount point?
Also, the kubectl cp help says that a tar binary must be present in the container for the copy to work. Maybe this is the reason why I am unable to copy. But I don't know how to check for the tar binary's presence, or how to get it if it's not there. Please help me with this.
I'm not sure why the cp command didn't work for you. However, I tried to add a tar file inside the pod and it worked.
I used the following command:
kubectl cp ./<TAR FILE PATH> <NAMESPACE>/<POD NAME>:/<INSIDE POD PATH>
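Since kubectl cp depends on a tar binary being present inside the container, you can also check for it first; a minimal sketch:

kubectl exec -n <NAMESPACE> <POD NAME> -- sh -c 'command -v tar || echo "tar not found"'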
It's not best practice to add a file to a pod like this. You could instead use an init container, add the file during the Docker image build, or use a volume mount.
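For illustration, a minimal sketch of the init-container approach (all names here are hypothetical): an init container copies the file from a ConfigMap volume into an emptyDir that the main container also mounts.

apiVersion: v1
kind: Pod
metadata:
  name: file-seeder                # hypothetical pod name
spec:
  initContainers:
  - name: seed-file
    image: busybox
    # copy the file into the shared volume before the app starts
    command: ["sh", "-c", "cp /seed/app.conf /shared/app.conf"]
    volumeMounts:
    - name: shared
      mountPath: /shared
    - name: seed
      mountPath: /seed
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /etc/app          # the file shows up here in the main container
  volumes:
  - name: shared
    emptyDir: {}
  - name: seed
    configMap:
      name: app-config             # hypothetical ConfigMap holding app.conf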

Copy a file into kubernetes pod without using kubectl cp

I have a use case where my pod runs as a non-root user and runs a Python app. Now I want to copy a file from the master node to the running pod. But when I try to run
kubectl cp app.py 103000-pras-dev/simplehttp-777fd86759-w79pn:/tmp
the command hangs, whereas when I run the pod as the root user the same command executes successfully. I was going through the code of kubectl cp, which internally uses the tar command. tar has multiple flags like --overwrite, --no-same-owner, --no-preserve and a few others, but from kubectl cp we can't pass those flags through to tar. Is there any way I can copy the file using the kubectl exec command, or any other way?
kubectl exec simplehttp-777fd86759-w79pn -- cp app.py /tmp/ **flags**
If the source file is a simple text file, here's my trick:
#!/usr/bin/env bash
function copy_text_to_pod() {
  namespace=$1
  pod_name=$2
  src_filename=$3
  dest_filename=$4
  # base64-encode locally, decode inside the pod
  base64_text=$(base64 < "$src_filename")
  kubectl exec -n "$namespace" "$pod_name" -- bash -c "echo \"$base64_text\" | base64 -d > $dest_filename"
}

copy_text_to_pod my-namespace my-pod-name /path/of/source/file /path/of/target/file
Maybe base64 is not necessary. I put it here in case there are special characters in the source file.
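The same trick works in the opposite direction (pod to local); a minimal sketch with illustrative names:

kubectl exec -n my-namespace my-pod-name -- base64 /path/in/pod/file | base64 -d > /local/target/file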
Meanwhile I found a hack. Disclaimer: this is not exactly kubectl cp, just a workaround.
I have written a Go program where I create a goroutine to read a file, attach it to stdin, and run the kubectl exec tar command with the proper flags. Here is what I did:
// Assumes ctx, pod, container, and files are already defined.
reader, writer := io.Pipe()
cmd := exec.CommandContext(ctx, "kubectl", "exec", pod.Name,
    "--namespace", pod.Namespace, "-c", container.Name, "-i",
    "--", "tar", "xmf", "-", "-C", "/", "--no-same-owner") // pass all the flags you want to
cmd.Stdin = reader

// Stream the tar archive into the pipe from a goroutine.
go func() {
    defer writer.Close()
    if err := util.CreateMappedTar(writer, "/", files); err != nil {
        logrus.Errorln("Error creating tar archive:", err)
    }
}()

if err := cmd.Run(); err != nil {
    logrus.Errorln("Error running kubectl exec:", err)
}
Helper function definition:

func CreateMappedTar(w io.Writer, root string, pathMap map[string]string) error {
    tw := tar.NewWriter(w)
    defer tw.Close()

    // addFileToTar (defined elsewhere) writes one src->dst entry into the archive.
    for src, dst := range pathMap {
        if err := addFileToTar(root, src, dst, tw); err != nil {
            return err
        }
    }
    return nil
}
Obviously, this still doesn't work because of the permission issue, but I was able to pass the tar flags.
If it is only a text file, it can also be "copied" via netcat.
1) You have to be logged into both pods:
$ kubectl exec -ti <pod_name> bash
2) Make sure you have netcat; if not, install it:
$ apt-get update
$ apt-get install netcat-openbsd
3) Go to a folder where you have write permissions, e.g. /tmp
4) Inside the container where you have the Python file, run:
$ cat app.py | nc -l <random_port>
Example
$ cat app.py | nc -l 1234
It will start listening on the provided port.
5) Inside the container where you want to have the file, run:
$ nc <PodIP_where_you_have_py_file> <port> > app.py
Example
$ nc 10.36.18.9 1234 > app.py
It must be the pod IP; it will not recognize the pod name. To get the IP, use kubectl get pods -o wide
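For example (output abridged, values illustrative):

$ kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE
mypod    1/1     Running   0          2d    10.36.18.9   worker-1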
It will copy the content of the app.py file into the other container's file. Unfortunately, you will need to add permissions manually, or you can use a script like the following (the sleep is required due to the speed of the "copying"):
#!/bin/sh
nc 10.36.18.9 1234 > app.py; sleep 2; chmod 770 app.py
kubectl cp is a bit of a pain to work with. For example:
- You need to install kubectl and configure it (possibly on multiple machines). In our company, most people only have restricted kubectl access through the Rancher web GUI; no CLI access is provided for most people.
- Network restrictions in enterprises.
- Large file downloads/uploads may stop or freeze sometimes, probably because the traffic goes through the k8s API server.
- Weird tar-related errors keep popping up, etc.
One of the reasons for the lack of support for copying files from a pod (or the other way around) is that k8s pods were never meant to be used like VMs. They are meant to be ephemeral, so the expectation is not to store/create any files on the pod/container disk.
But sometimes we are forced to do this, especially while debugging issues or using external volumes.
Below is the solution we found effective. It might not be right for you/your team.
We now use Azure Blob Storage as a mediator to exchange files between a Kubernetes pod and any other location. The container image is modified to include the azcopy utility (the Dockerfile RUN instruction below will install azcopy in your container).
RUN /bin/bash -c 'wget https://azcopyvnext.azureedge.net/release20220511/azcopy_linux_amd64_10.15.0.tar.gz && \
    tar -xvzf azcopy_linux_amd64_10.15.0.tar.gz && \
    cp ./azcopy_linux_amd64_*/azcopy /usr/bin/ && \
    chmod 775 /usr/bin/azcopy && \
    rm azcopy_linux_amd64_10.15.0.tar.gz && \
    rm -rf azcopy_linux_amd64_*'
Check out this SO question for more on azcopy installation.
When we need to download a file, we simply use azcopy to copy the file from within the pod to Azure Blob Storage. This can be done either programmatically or manually.
Then we download the file to the local machine from Azure Storage Explorer, or some job/script can pick up the file from the blob container.
A similar thing is done for upload: the file is first placed in the blob storage container, either manually using Storage Explorer or programmatically, and then azcopy running inside the pod pulls the file from blob storage and places it inside the pod.
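For illustration, the pod side might look like this (account, container, and SAS token are placeholders):

# inside the pod: upload a file to blob storage
azcopy copy '/tmp/debug.log' 'https://<account>.blob.core.windows.net/<container>/debug.log?<SAS-token>'
# inside the pod: download a file from blob storage
azcopy copy 'https://<account>.blob.core.windows.net/<container>/input.csv?<SAS-token>' '/tmp/input.csv'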
The same can be done with AWS (S3) or GCP or any other cloud provider. Probably even SCP, SFTP, or rsync could be used.

Can we see transfer progress with kubectl cp?

Is it possible to know the progress of file transfer with kubectl cp for Google Cloud?
No, this doesn't appear to be possible.
kubectl cp appears to be implemented by doing the equivalent of
kubectl exec podname -c containername tar cf - /whatever/path | tar xf -
This means two things:
tar(1) doesn't print any useful progress information. (You could in principle add a v flag to print each file name to stderr as it goes by, but that won't tell you how many files there are in total or how large they are.) So kubectl cp as implemented doesn't have any way to get this information out.
There's no richer native Kubernetes API for copying files.
If moving files in and out of containers is a key use case for you, it will probably be easier to build, test, and run things by adding a simple HTTP service. You can then rely on things like the HTTP Content-Length: header for progress metering.
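As a rough sketch of that idea (assuming the container already serves files over HTTP on port 8080; the port and path are made up):

# forward a local port to the pod's HTTP server
kubectl port-forward podname 8080:8080 &
# curl can show a progress meter because the server sends Content-Length
curl -o ./file.bin http://localhost:8080/whatever/path/file.bin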
One option is to use pv, which will show time elapsed, data transferred, and throughput (e.g. MB/s):
$ kubectl exec podname -c containername -- tar cf - /whatever/path | pv | tar xf -
14.1MB 0:00:10 [1.55MB/s] [ <=> ]
If you know the expected transfer size ahead of time, you can also pass it to pv, which will then calculate a % progress and an ETA, e.g. for a 100m transfer:
$ kubectl exec podname -c containername -- tar cf - /whatever/path | pv -s 100m | tar xf -
13.4MB 0:00:09 [1.91MB/s] [==> ] 13% ETA 0:00:58
You obviously need to have pv installed (locally) for any of the above to work.
It's not possible, but you can find here how to implement rsync with Kubernetes; rsync shows you the progress of the file transfer.
rsync files to a kubernetes pod
I figured out a hacky way to do this. If you have bash access to the container you're copying to, you can do something like wc -c <file> on the remote, then compare that to the size locally. du -h <file> is another option; it gives human-readable output, so it may be better.
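For example (paths illustrative):

kubectl exec podname -- wc -c /remote/path/file   # size inside the pod
wc -c /local/path/file                            # size locally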
On macOS, there is still the hacky way of opening the Activity Monitor on the Network tab. If you are copying with kubectl cp from your local machine to a distant pod, the total transfer is shown in the "Sent Bytes" column.
It's not of super high precision, but it sort of does the job without installing anything new.
I know it doesn't show active progress for each file, but it does output a status including a byte count for each completed file, which for multiple files run via scripts is almost as good as active progress:
kubectl cp local.file container:/path/on/container --v=4
Note that --v=4 enables verbose mode and will give you output. I found that kubectl cp shows this output from v=3 through v=5.

How to copy files from kubernetes Pods to local system

I'm trying to copy files from Kubernetes pods to my local system. I am getting the below error while running the following command:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap ./test.cap
Output:
tar: home/azureuser/test: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
error: home/azureuser/test no such file or directory
I can see the file under the given path. I am really confused. Could you please help me out?
As stated in kubectl help:
kubectl cp --help
Copy files and directories to and from containers.
Examples:
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'kubectl cp' will fail.
# Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
# Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
Options:
-c, --container='': Container name. If omitted, the first container in the pod will be chosen
Usage:
kubectl cp <file-spec-src> <file-spec-dest> [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
You can also log in to your container and check if the file is there:
kubectl exec -it aks-ssh2-6cd4948f6f-fp9tl /bin/bash
ls -la /home/azureuser/test.cap
If this still doesn't work, you may try to copy your files to the workdir and then retry copying them using just their names. It's weird, but it works for now.
Consider the advice of kchugalinskiy here: #58692.
Let's say you are copying a file from the bin folder to the local system. The command is
kubectl cp default/POD_NAME:bin/FILE_NAME /Users/username/FILE_NAME
You can connect to the pod to verify that you are specifying the correct file name:
kubectl exec -ti POD_NAME bash
According to https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
kubectl cp <file-spec-src> <file-spec-dest> is equivalent to using
kubectl exec -n <some-namespace> <some-pod> -- tar cf - <src-file> | tar xf - -C <dest-file>
So technically if you do not have tar installed on the pod, you can do kubectl exec -n <some-namespace> <some-pod> -- cat <src-file> > <dest-file>
Assuming the file is small or already compressed, the effect should be the same, except you cannot use cat on a directory or a set of files.
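For instance (namespace, pod, and paths are illustrative):

# copy a single file out of a pod that has no tar
kubectl exec -n mynamespace mypod -- cat /var/log/app.log > ./app.log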
The command posted in the question is absolutely right. As answered before, this particular issue seems to be a missing tar binary in the container. I actually did not know it was needed, but confirmed that the pod has it:
# find / -name tar
/bin/tar
/usr/lib/mime/packages/tar
/usr/share/doc/tar
My error was using . to copy to the current directory (which works with cp and scp); kubectl cp needs the full path, as shown in the original question:
kubectl cp pod-name-shown-in-get-pods:path/to/filename /local/dir/filename
But not:
kubectl cp pod-name-shown-in-get-pods:path/to/filename .
Which gives:
error: open .: is a directory
tar: Removing leading `/' from member names
Now the tar in the error message makes sense!
Note that if there is a leading / in the source path, as in the following example:
kubectl cp pod-name-shown-in-get-pods:/etc/resolv.conf /local/dir/resolv.conf
You would also see:
tar: Removing leading `/' from member names
However, the warning can be ignored, as the file is still copied. Use etc/resolv.conf instead of /etc/resolv.conf in the above example to copy without the warning.
"kubectl cp" command is used to copy files from pods to local path and vice versa
Copying a file from a pod to local:
kubectl cp <pod_name>:<file_path> <destination_path>
Copying a file from a specific container of a pod to local:
kubectl cp <pod_name>:<file_path> <destination_path> -c specific_container
Copying a file from local to a pod:
kubectl cp <local_source_path> <pod_name>:<destination_path>
The kubectl cp command has already been mentioned by some users in this thread.
kubectl cp <pod-id>:<path> <local-path> -n <namespace> -c <specific_container>
Note that to run this command, the tar utility must already be installed on the pod.
However, I came across a few errors while running this command in Windows PowerShell.
PS P:\Users\nstty\Downloads\k8s-diags> kubectl cp dremio-master-0:/var/log/dremio/server.log P:\Users\nstty\Downloads\k8s-diags\server-logs\
error: one of src or dest must be a local file specification
When running this command on Windows, don't use the full path of the local system; use a relative path instead (. or ..). Using a relative path in the below command, I got a different error:
PS P:\Users\nstty\Downloads\k8s-diags> kubectl cp dremio-master-0:/var/log/dremio/server.log .
tar: Removing leading `/' from member names
error: open .: is a directory
error: open .: is a directory
If you are copying a file, then in the local path use the relative path along with the file name that you want for the copied file. kubectl will first create this file and then copy the contents into it. Below is the working command:
PS P:\Users\nstty\Downloads\k8s-diags> kubectl cp dremio-master-0:/var/log/dremio/server.log .\server-logs\server.log
tar: Removing leading `/' from member names
This message is just a warning from the tar utility in your pod. The file should have been copied to your local system.
Alternate option: if you want to avoid kubectl cp, here is another approach that we use.
kubectl cp <pod-id>:<path> <destination-path> -n <namespace>
Worked for me.
You can mount a local directory into the pod. Update your aks-ssh YAML file:
spec:
  ...
  containers:
  ...
    volumeMounts:
    - name: test-dir
      mountPath: /home/azureuser
  ...
  volumes:
  - name: test-dir
    hostPath:
      path: /path/to/your/local/dir
Now you can access your files in the local directory. (Note that hostPath refers to the filesystem of the node the pod runs on, so this is "local" only when the pod is scheduled on that machine, e.g. in a single-node cluster.)
For people working on a Windows machine there is an additional gotcha: as of October 2021 you cannot include a drive letter in your local path.
So if you were to try a command like:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap C:/Temp/Test
you would get an error, because kubectl cp treats the colon in the Windows path as the separator between a pod name and the path within the pod.
So it would see C:/Temp/Test as a pod named "C" with the path /Temp/Test.
The way to get around this is to use a relative Windows path instead of an absolute one, relative to your current working directory.
So if my current working directory is C:\Users\JoeBloggs and I wanted to copy down to C:\Temp\Test, I'd need to use the command:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap ../../Temp/Test
Note that this issue looks like it may be fixed soon. See https://github.com/kubernetes/kubernetes/pull/94165
Maybe someone else has met this error:
tar: removing leading '/' from member names
error: open .: is a directory
which is induced by the following commands (kc here is an alias for kubectl):
kc cp -n monitoring <pod name>:/usr/share/grafana/conf/defaults.ini ./
kc cp -n monitoring <pod name>:/usr/share/grafana/conf/defaults.ini ./default.ini
To solve it, we should add a destination folder, per the docs:
kc cp -n monitoring <pod name>:/usr/share/grafana/conf/defaults.ini ./tmp/default.ini
This works for me:
kubectl cp "namespace"/"pod_name":"path_in_pod" "local_path"
Example:
kubectl cp mynamespace/mypod:var/www/html/index.html \Users\myuser\Desktop\index.html
I resolved this problem by setting the source path to a relative path.
If the file location is /home/azureuser/test.cap, and the working dir is /home/azureuser/, the command is:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:test.cap ./test.cap
If anyone uses Windows pods, it may be hard to get files copied to the pods from a local machine using those Linux paths with the kubectl cp command.
Here is the procedure to copy files from a local machine to a Kubernetes pod (especially a Windows container):
I want to copy node.aspx from my local machine to
podname:\c:\inetpub\wwwroot
First upload Node.aspx to your cloud drive; the path will be /home/{your_username}, in my case /home/pranesh.
Then find out the pod name, in my case it's aspx-deployment-84597d88f5-pk5nh, and run the command below:
PS /home/pranesh> kubectl cp /home/pranesh/Node.aspx aspx-deployment-84597d88f5-pk5nh:/Node.aspx
This copies the file to the C drive of the container; then move the file from the C drive to the required path with PowerShell:
PS /home/pranesh> kubectl exec aspx-deployment-84597d88f5-pk5nh powershell "Copy-Item "C:\Node.aspx" -Destination "C:\inetpub\wwwroot""
Use the reverse procedure for copying from the container to the cloud drive and downloading.
kubectl cp will not work if your container does not have the tar command in its PATH. From your error, it seems the tar command is not available in your container.
https://github.com/kubernetes/kubernetes/issues/58512
Please explore the other options.
Kubernetes gives a "file not found" error when the user does not have permissions to a pod. That was my problem.
On my side, the issue was having multiple containers inside the pod:
kubectl cp -c grafana \
metrics/grafana-5c4f76b49b-p88lc:/etc/grafana \
./grafana/etc
So set the container name with -c grafana, and properly prefix the pod with its namespace, in my case metrics/.
I tried this method on Azure and it worked:
kubectl cp 'POD NAME':xyz.json test
kubectl cp is the command for copying
POD NAME = vote-app
xyz.json is the file that needs to be copied from the pod
test is the file created in the drive of the Azure directory
So the final command would be:
kubectl cp vote-app:xyz.json test
test will get generated in the Azure directory, and later you can download the test file from the download option of Azure.
I couldn't get kubectl to work for this. I was getting the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This worked for me instead:
docker cp CONTAINERID:FILEWITHPATH DESTFILENAME
where "CONTAINERID" was retrieved by calling docker ps.
You need to mention the namespace in which your pod is available.

Copy folder with wildcard from docker container to host

Creating a backup script to dump MongoDB inside a container, I need to copy the dump folder out of the container, but docker cp doesn't seem to work with wildcards:
docker cp mongodb:mongo_dump_* .
The following is thrown in the terminal:
Error response from daemon: lstat /var/lib/docker/aufs/mnt/SomeHash/mongo_dump_*: no such file or directory
Is there any workaround to use wildcards with the cp command?
I had a similar problem and had to solve it in two steps:
$ docker exec <id> bash -c "mkdir -p /extract; cp -f /path/to/fileset* /extract"
$ docker cp <id>:/extract/. .
It seems there is no way yet to use wildcards with the docker cp command: https://github.com/docker/docker/issues/7710
You can create the mongo dump files in a folder inside the container and then copy the folder, as detailed in the other answer here.
If you have a large dataset and/or need to do the operation often, the best way to handle this is to use Docker volumes, so you can access the files from the container directly in your host folder without any extra command: https://docs.docker.com/engine/userguide/containers/dockervolumes/
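A minimal sketch of the volume approach (names and paths are made up, and it assumes the image ships mongodump):

# mount a host directory into the container so dumps land directly on the host
docker run -d --name mongodb -v /backups/mongo:/dumps mongo
docker exec mongodb sh -c 'mongodump --out /dumps/mongo_dump_$(date +%F)'
# the dump is now on the host under /backups/mongo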
Today I faced the same problem and solved it like this:
docker exec container /bin/sh -c 'tar -cf - /some/path/*' | tar -xvf -
Hope this helps.