Error while copying local file to k8s container - kubernetes

I am trying to copy a jar file to a specific pod's container by executing the command below:
kubectl cp local_policy.jar podname:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/security.
I am getting the error below:
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"tar\\\": executable file not found in $PATH\"\n"
Please help.

The tar binary is necessary for kubectl cp to work. This is noted in the help page of kubectl cp:
kubectl cp --help
Copy files and directories to and from containers.
Examples:
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'kubectl cp' will fail.

Just install the tar binary in the container to or from which you want to copy files. This will allow kubectl to copy files between your local machine and the target container.
On Amazon Linux you can install it via yum:
yum install tar
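As a sketch of what that can look like in practice, you can either bake tar into the image at build time, or, as a temporary workaround, install it into the running container with kubectl exec (this assumes a yum-based distribution, that the container runs as root, and that losing the change on restart is acceptable):
kubectl exec podname -- yum install -y tar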

kubectl cp bitnami apache helm chart: cannot copy to exact location of pod filesystem

I'm trying to deploy a React app on my local machine with Docker Desktop and its Kubernetes cluster, using the Bitnami Apache Helm chart.
I'm following this tutorial.
The tutorial makes you publish the image on a public repo (step 2) and I don't want to do that. It is indeed possible to pass the app files through a persistent volume claim.
This is described in the following tutorial.
Step 2 of this second tutorial has you create a pod pointing to a PVC and then asks you to copy the app files there by using the command
kubectl cp /myapp/* apache-data-pod:/data/
My issues:
I cannot use the * wildcard, or else I get an error. To avoid this I just run
kubectl cp . apache-data-pod:/data/
This command copies the files into the pod, but it creates another data folder inside the already existing data folder of the pod filesystem.
After this command my pod filesystem looks like this
I tried executing
kubectl cp . apache-data-pod:/
But this copies the files into the root of the pod filesystem, at the same location where the first data folder is.
I need to copy the data directly into <my_pod>:/data/.
How can I achieve such behaviour?
Regards
Use the full path in the command, as shown below, when copying files to or from a pod:
kubectl cp apache-pod:/var/www/html/index.html /tmp
If there are multiple containers in the pod, use the following syntax to copy a file from your local machine to the pod:
kubectl cp /<path-to-your-file>/<file-name> <pod-name>:<fully-qualified-file-name> -c <container-name>
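For example (the container name "apache" and the file paths here are hypothetical, just to illustrate the syntax):
kubectl cp ./index.html apache-pod:/var/www/html/index.html -c apache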
Points to remember:
When referring to a file path on the pod, it is always relative to the WORKDIR you have defined in your image.
Unlike on Linux, the base directory does not always start from /; the WORKDIR is the base directory.
When there are multiple containers in the pod, you need to specify the container to use for the copy operation with the -c parameter.
Quick example of kubectl cp: here is the command to copy the index.html file from the pod's /var/www/html to the local /tmp directory.
There is no need to specify the full path when the document root is the WORKDIR or the default directory of the image.
kubectl cp apache-pod:index.html /tmp
To make it less confusing, you can always write the full path like this
kubectl cp apache-pod:/var/www/html/index.html /tmp
Also refer to this Stack Overflow question for more information.
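For the original problem of copying the contents of the current directory straight into <my_pod>:/data/ without an extra nested folder, one common workaround is to stream a tar archive through kubectl exec (kubectl cp itself uses tar over exec under the hood, so this assumes tar is available in the container):
tar cf - . | kubectl exec -i apache-data-pod -- tar xf - -C /data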

How to move a postgresql backup from a server to a local machine?

Currently I am creating a backup just by using "pg_dump dbname > path" in the pod terminal, but this only saves it inside the OpenShift container.
How would I transfer the dump to a local device?
Is there a command for grabbing a database backup and downloading it onto the local machine?
You can use oc cp to copy files from a container to the local machine:
# Copy /tmp/foo from a remote pod to /tmp/bar locally
oc cp <some-pod-name>:/tmp/foo /tmp/bar
So for example:
oc cp postgresql-1-ptcdm:/tmp/mybackupfile /home/myusername/mybackupfile
Note that this requires that the 'tar' binary is present in your container image. If 'tar' is not present, 'oc cp' will fail. For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using oc exec.
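As a sketch of the oc exec approach for this case, you can stream the dump straight to your local machine instead of writing it inside the pod first (the pod name, database name and local path below are the ones used in the question and answer; yours will differ):
oc exec postgresql-1-ptcdm -- pg_dump dbname > /home/myusername/mybackupfile.sql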

How do I use lsquic (LiteSpeed QUIC and HTTP/3 library)?

https://github.com/litespeedtech/lsquic
I want to use lsquic. After completing the setup in the README, what should I do to send data from client to server and track the network traffic? For the setup, do I just follow the three steps: install BoringSSL, then lsquic, and then Docker? Would just copying and pasting the commands into the terminal work?
Error message:
CMake Error: The current CMakeCache.txt directory /src/lsquic/CMakeCache.txt is different than the directory /Users/nini/Development/lsquic/boringssl/lsquic where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt
The command '/bin/sh -c cd /src/lsquic && cmake -DBORINGSSL_DIR=/src/boringssl . && make' returned a non-zero code: 1
(base) pc-68-32:lsquic nini$ sudo docker run -it --rm lsquic http_client -s www.google.com -p / -o version=Q046
Password:
Unable to find image 'lsquic:latest' locally
docker: Error response from daemon: pull access denied for lsquic, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
You can build lsquic with Docker and then run it (given the "Unable to find image" error, I think you did not build the Docker image). To do so, git clone (just) the lsquic repository and run the commands given in the README section titled "Building with Docker". The Docker build will, among other things, download BoringSSL and build it, so you don't have to do that yourself, and then it will build lsquic for you.
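A sketch of what those steps typically look like (the run command is the one from your question; the build command is standard Docker usage that produces the lsquic image the run command expects, but check the current "Building with Docker" section of the README for the exact, up-to-date commands):
git clone https://github.com/litespeedtech/lsquic.git
cd lsquic
docker build -t lsquic .
docker run -it --rm lsquic http_client -s www.google.com -p / -o version=Q046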

gcloud command changes ownership of the current directory

I'm performing the usual operation of fetching Kubernetes cluster credentials from GCP. The gcloud command doesn't fetch the credentials and, surprisingly, changes the ownership of the local directory:
~/tmp/1> ls
~/tmp/1> gcloud container clusters get-credentials production-ng
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) Unable to write file [/home/vladimir/tmp/1]: [Errno 21] Is a directory: '/home/vladimir/tmp/1'
~/tmp/1> ls
ls: cannot open directory '.': Permission denied
Other commands, like gcloud container clusters list, work fine. I've tried reinstalling gcloud.
This happens if your KUBECONFIG has an empty entry, like :/Users/acme/.kube/config
gcloud resolves the empty value as the current directory, changes its permissions, and tries to write to it.
Reported at https://issuetracker.google.com/issues/143911217
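A quick way to check and fix this (the kubeconfig path is the one from the example above; use your own):
echo $KUBECONFIG                              # a leading, trailing, or doubled ':' indicates an empty entry
export KUBECONFIG=/Users/acme/.kube/config    # keep only the real path(s)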
It happened to be a problem with kubectl. Reinstalling it solved this strange issue.
If you, like me, are stuck with strange gcloud behavior, the following points could help to track down the issue:
Check the alias command and whether it really points to the intended binary;
Launch a separate Docker container with gsutil and feed it your config files. If gcloud container clusters get-credentials ... runs smoothly there, then it's a problem with the binaries (not the configuration):
docker run -it \
-v $HOME/.config:/root/.config \
-v $HOME/.kube:/root/.kube google/cloud-sdk:217.0.0-alpine sh
A problem with the binaries can be solved just by reinstalling or updating;
If it's a problem with the configs, you can back them up and reinstall kubectl / gsutil from scratch using not just apt-get remove ..., but apt-get purge .... Be aware: purge removes config files!
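As an illustration of that last point, assuming the tools were installed from Google's apt repository (package names differ for other installation methods):
cp -r ~/.config/gcloud ~/.config/gcloud.bak   # back up the configuration first
sudo apt-get purge google-cloud-sdk kubectl
sudo apt-get install google-cloud-sdk kubectl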
Hope this helps somebody else.

API error (500): Container command not found or does not exist

kubectl describe pods and the pod logs show the error above. The pod spec contains:
command:
- bundle exec unicorn -c config/unicorn/production.rb -E production
The container can't start on k8s, and these errors occur.
But when I run
docker run -d image [CMD]
the container works fine.
"command" is an array, so each argument has to be a separate element, not all on one line
For anyone else running into this problem:
make sure the gems (including unicorn) are actually installed in the volume used by the container. If not, do a bundle install.
Another reason for this kind of error could be that the directory specified under working_dir (in the docker-compose.yml) does not exist (see Misleading error message "ERROR: Container command not found or does not exist.").
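As a minimal illustration of that last point (service and path names here are hypothetical), working_dir must point to a directory that actually exists in the image or in a mounted volume:
services:
  app:
    image: myapp:latest
    working_dir: /app
    volumes:
      - ./:/app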