How to ls in etcd v3?

How can I ls the "/config" directory in etcd v3 to get "/a /b /c"?
I don't want to run "get /config --prefix" and then compare all of the keys 10000 times.
/config
.../a/
.../b/
....../xxx # about 100000 keys
.../c/

ls seems to have been deliberately omitted in v3, but in your case something like this can work:
ETCDCTL_API=3 etcdctl get /config --prefix
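If fetching every value under /config is the concern, you can at least restrict the query to key names (etcd v3 has a flat keyspace, so there is no true directory listing and some client-side filtering is unavoidable). A minimal sketch with the v3 etcdctl:
# return only the key names (no values) under the /config prefix
ETCDCTL_API=3 etcdctl get /config --prefix --keys-only
# rough client-side "ls" of the first level below /config
ETCDCTL_API=3 etcdctl get /config --prefix --keys-only | grep -v '^$' | cut -d/ -f1-3 | sort -u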

Related

Passing Multiple Kubectl Commands to a Pod through Ansible

I am having some trouble passing multiple commands to a pod running on a Rancher machine through Ansible. Currently, I am trying to execute two kubectl commands in the same task: change to the /tmp directory of the pod and then execute ls. The problem is that if I run the commands in different tasks, the ls lists not the /tmp directory but the default directory I land in every time I run a kubectl command. It is as if every time I access the pod with kubectl, I am running isolated tasks, not dependent on the task run before. Of course, I could simply run ls /tmp to list the /tmp directory, and then I would only need one command, but that does not fulfill my objective with what I am trying to understand here.
I've assembled the following playbook to try to run both cd /tmp and ls in the same command. Take the following playbook as an example:
---
- hosts: localhost # group of hosts on host file
  connection: local
  remote_user: root
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes
  tasks:
    - name: Change to /tmp and ls
      command: |
        kubectl --namespace=redmine exec redmine-quick-testing-6c57cc5d65-lwkww -- /bin/bash -c "cd /tmp"
        kubectl --namespace=redmine exec redmine-quick-testing-6c57cc5d65-lwkww -- /bin/bash -c "ls"
Ansible version:
ansible 2.9.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
What could possibly be wrong?
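For what it's worth, each kubectl exec starts a fresh session in the container, so a cd in one invocation cannot affect the next one. The usual workaround is to chain the commands inside a single shell invocation; a minimal sketch using the pod and namespace from the question:
# one exec, one shell: the cd and the ls run in the same session
kubectl --namespace=redmine exec redmine-quick-testing-6c57cc5d65-lwkww -- /bin/bash -c "cd /tmp && ls"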

How can I clear all children nodes of a data node, but NOT delete the data node itself in zookeeper?

I have a znode: /test
And /test has two children nodes: /test/data1, /test/data2
How can I delete /test/data1 and /test/data2, but at the same time, NOT delete the node /test?
You can execute something like the following:
zkCli.sh -server xxx ls /test | \
grep "^\[" | \
grep -o -P "\w*" | \
while read znode ; do zkCli.sh -server xxx delete /test/$znode ; done
This uses only zkCli.sh and bash commands, but it is not optimal because it connects to the ZooKeeper server multiple times (once for each direct child deletion, plus once to fetch the children list). A more straightforward approach would be to use a ZooKeeper client library such as kazoo or the ZooKeeper Java API for this task.

Populating user home directory in JupyterHub

I'm trying to populate the home directory of the user on JupyterHub. I've followed the Zero to JupyterHub with Kubernetes guide and have a working cluster. I have the folders I want to copy in the container but I'm not sure how to copy them so that they're available to the user.
lifecycleHooks:
  postStart:
    exec:
      command: ["cp", "-a", "mydir", "/home/jovyan/mydir"]
When I get a shell in my container, the folders are there in /home/jovyan, but when the exec hook runs, these folders can't be found. I know I'm missing something simple here.
I found the best way is to copy the folders you need to a directory other than /home/jovyan, such as /tmp, and then copy them from there (the user's persistent volume is mounted over /home/jovyan when the pod spawns, so anything baked into the image at that path is hidden).
I now have something like this in my config.yaml, which allows running multiple commands separated by a semicolon:
lifecycleHooks:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          cp -r /tmp/folder_a /home/jovyan;
          cp -r /tmp/folder_b /home/jovyan
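To verify that the hook worked, you can exec into the spawned single-user pod and list the home directory (the pod name and namespace below are placeholders; adjust them for your deployment):
# check that the folders landed in the user's home directory
kubectl exec -n <namespace> -it jupyter-<username> -- ls -la /home/jovyan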

How to copy files from kubernetes Pods to local system

I'm trying to copy files from Kubernetes Pods to my local system. I am getting the below error while running following command:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap ./test.cap
Output:
tar: home/azureuser/test: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
error: home/azureuser/test no such file or directory
I could see the file under above given path. I am really confused.
Could you please help me out?
As stated in kubectl help:
kubectl cp --help
Copy files and directories to and from containers.
Examples:
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'kubectl cp' will fail.
# Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
# Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
# Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
Options:
-c, --container='': Container name. If omitted, the first container in the pod will be chosen
Usage:
kubectl cp <file-spec-src> <file-spec-dest> [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
You can also log in to your container and check if the file is there:
kubectl exec -it aks-ssh2-6cd4948f6f-fp9tl /bin/bash
ls -la /home/azureuser/test.cap
If this still doesn't work, try:
You may try to copy your files to the workdir and then retry copying them using just their names. It's weird, but it works for now.
Consider advice of kchugalinskiy here #58692.
Let's say you are copying a file from the bin folder to the local system. The command is
kubectl cp default/POD_NAME:bin/FILE_NAME /Users/username/FILE_NAME
You can connect to the pod to verify that you are specifying the correct file name:
kubectl exec -ti POD_NAME bash
According to https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
kubectl cp <file-spec-src> <file-spec-dest> is equivalent to using
kubectl exec -n <some-namespace> <some-pod> -- tar cf - <src-file> | tar xf - -C <dest-file>
So technically if you do not have tar installed on the pod, you can do kubectl exec -n <some-namespace> <some-pod> -- cat <src-file> > <dest-file>
Assuming the file is small or already compressed, the effect should be the same, except you cannot use cat on a directory or a set of files.
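As a concrete illustration of the cat fallback (the pod, namespace, and file names here are only examples):
# stream a single file out of the pod without relying on tar
kubectl exec -n default mypod -- cat /var/log/app.log > ./app.log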
The command posted in the question is absolutely right. As answered before, this particular issue seems to be a missing tar binary in the container. I actually did not know it was needed, but I confirmed that the pod has it:
# find / -name tar
/bin/tar
/usr/lib/mime/packages/tar
/usr/share/doc/tar
My error was using . to copy to the current directory (which works with cp and scp); kubectl cp needs the full path, as shown in the original question:
kubectl cp pod-name-shown-in-get-pods:path/to/filename /local/dir/filename
But not:
kubectl cp pod-name-shown-in-get-pods:path/to/filename .
Which gives:
error: open .: is a directory
tar: Removing leading `/' from member names
Now the tar in the error message makes sense!
Note that if there is a leading / in the source path, as in the following example:
kubectl cp pod-name-shown-in-get-pods:/etc/resolv.conf /local/dir/resolv.conf
You would also see:
tar: Removing leading `/' from member names
However, the warning can be ignored, as the file is still copied. Use etc/resolv.conf instead of /etc/resolv.conf in the above example to copy without the warning.
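Following that advice, the warning-free variant of the earlier command would be:
kubectl cp pod-name-shown-in-get-pods:etc/resolv.conf /local/dir/resolv.conf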
"kubectl cp" command is used to copy files from pods to local path and vice versa
Copying file from pod to local
kubectl cp <pod_name>:<file_path> <destination_path>
Copying file from specific container of pod to local
kubectl cp <pod_name>:<file_path> <destination_path> -c specific_container
Copying file from local to pod
kubectl cp <local_source_path> <pod_name>:<destination_path>
The kubectl cp command has already been mentioned by some of the users in this thread.
kubectl cp <pod-id>:<path> <local-path> -n <namespace> -c <specific_container>
Note that to run this command, the tar utility must already be installed in the pod.
However, I have come across a few errors while running this command in Windows PowerShell.
PS P:\Users\nstty\Downloads\k8s-diags> kubectl cp dremio-master-0:/var/log/dremio/server.log P:\Users\nstty\Downloads\k8s-diags\server-logs\
error: one of src or dest must be a local file specification
When running this command on Windows, don't use the full path of the local system; use a relative path instead (. or ..). Using a relative path in the command below, we now get a different error.
PS P:\Users\nstty\Downloads\k8s-diags> kubectl cp dremio-master-0:/var/log/dremio/server.log .
tar: Removing leading `/' from member names
error: open .: is a directory
If you are copying a file, then in the local path use a relative path along with the file name that you want for the copied file. kubectl will first create this file and then copy the contents into it. Below is the working command.
PS P:\Users\nstty\Downloads\k8s-diags> kubectl cp dremio-master-0:/var/log/dremio/server.log .\server-logs\server.log
tar: Removing leading `/' from member names
This message is just a warning from the tar utility in your pod. The file should be copied to your local system.
Alternate option: If you want to avoid kubectl cp, here is another approach which we use.
kubectl cp <pod-id>:<path> <destination-path> -n <namespace>
Worked for me.
You can mount a local directory into the pod.
Update your aks-ssh yaml file:
spec:
  ...
  containers:
    ...
    volumeMounts:
      - name: test-dir
        mountPath: /home/azureuser
    ...
  volumes:
    - name: test-dir
      hostPath:
        path: /path/to/your/local/dir
Now you can access your files in the local directory.
For people working on a Windows machine there is an additional gotcha: as of October 2021 you cannot include a drive letter in your local path.
So if you were to try a command like:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap C:/Temp/Test
you would get this error because kubectl cp sees the colon in the Windows path as the separator between a pod name and the path within the pod.
So it would see C:/Temp/Test as a pod named "C" with the path "/Temp/Test".
The way to get around this is to use a relative Windows path instead of an absolute path. It will need to be relative to your current working directory.
So if my current working directory is C:\Users\JoeBloggs and I wanted to copy down to C:\Temp\Test I'd need to use the command:
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:/home/azureuser/test.cap ../../Temp/Test
Note that this issue looks like it may be fixed soon. See https://github.com/kubernetes/kubernetes/pull/94165
Maybe someone could meet this error:
tar: removing leading '/' from member names
error: open .: is a directory
which was induced by the following commands:
kc cp -n monitoring <pod name>:/usr/share/grafana/conf/defaults.ini ./
kc cp -n monitoring <pod name>:/usr/share/grafana/conf/defaults.ini ./default.ini
To solve it, we should add a destination folder, per the docs:
kc cp -n monitoring <pod name>:/usr/share/grafana/conf/defaults.ini ./tmp/default.ini
This works for me:
kubectl cp "namespace"/"pod_name":"path_in_pod" "local_path"
Example:
kubectl cp mynamespace/mypod:var/www/html/index.html \Users\myuser\Desktop\index.html
I resolved this problem by setting the source path to be a relative path.
If the file location is /home/azureuser/test.cap and the working directory in the pod is /home/azureuser/, the command is
kubectl cp aks-ssh2-6cd4948f6f-fp9tl:test.cap ./test.cap
If anyone uses Windows pods, it may be hard to copy files to the pods from the local machine with those Linux-style paths for the kubectl cp command.
Procedure to copy files from the local machine to a Kubernetes pod (especially a Windows container):
I want to copy node.aspx from my local machine to
podname:\c:\inetpub\wwwroot
First upload Node.aspx to your cloud drive; the path will be
/home/{your_username}, in my case /home/pranesh.
Then find out the pod name, in my case
aspx-deployment-84597d88f5-pk5nh, and follow the command below:
PS /home/pranesh> kubectl cp /home/pranesh/Node.aspx aspx-deployment-84597d88f5-pk5nh:/Node.aspx
This copies the file to the C drive of the container;
then move the file from the C drive to the required path with PowerShell:
PS /home/pranesh> kubectl exec aspx-deployment-84597d88f5-pk5nh powershell "Copy-Item 'C:\Node.aspx' -Destination 'C:\inetpub\wwwroot'"
Use the reverse procedure for copying from container to cloud drive and download.
kubectl cp will not work if your container does not have the tar command in the PATH. From your error it seems the tar command is not available in your container.
https://github.com/kubernetes/kubernetes/issues/58512
Please explore other options
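A quick way to check whether tar is available in the container before attempting the copy (assuming the image provides a shell; the pod name is a placeholder):
# prints the path to tar if it is present, or a notice if it is missing
kubectl exec <pod-name> -- sh -c 'command -v tar || echo "tar not found"'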
Kubernetes gives a file not found error when the user does not have permissions to a pod. That was my problem.
On my side the issue was with having multiple containers inside the pod:
kubectl cp -c grafana \
metrics/grafana-5c4f76b49b-p88lc:/etc/grafana \
./grafana/etc
so set the container name with -c grafana and properly prefix the pod with its namespace, in my case metrics/.
I tried this method on Azure and it worked:
kubectl cp 'POD NAME':xyz.json test
kubectl cp is the command for copying;
POD NAME = vote-app;
xyz.json is the file that needs to be copied from the pod;
test is the file created in the drive of the Azure directory...
So final command would be:
kubectl cp vote-app:xyz.json test
test will be generated in the Azure directory, and you can later download the test file from the download option of Azure.
I couldn't get kubectl to work for this. Was getting error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This worked for me instead:
docker cp CONTAINERID:FILEWITHPATH DESTFILENAME
where "CONTAINERID" was retrieved by calling docker ps.
You need to mention the namespace in which your pod is available.

How to configure kubectl with cluster information from a .conf file?

I have an admin.conf file containing info about a cluster, so that the following command works fine:
kubectl --kubeconfig ./admin.conf get nodes
How can I configure kubectl to use the cluster, user, and authentication from this file as the default in one command? I only see separate set-cluster, set-credentials, set-context, use-context etc. I want to get the same output when I simply run:
kubectl get nodes
Here is the official documentation on how to configure kubectl:
http://kubernetes.io/docs/user-guide/kubeconfig-file/
You have a few options; specific to this question, you can just copy your admin.conf to ~/.kube/config.
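For example (this overwrites any existing default config, so back it up first if needed):
mkdir -p ~/.kube
cp ./admin.conf ~/.kube/config
kubectl get nodes   # now works without --kubeconfig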
The best way I've found was to use an environment variable:
export KUBECONFIG=/path/to/admin.conf
I just alias the kubectl command into separate ones for my dev and production environments via .bashrc
alias k8='kubectl'
alias k8prd='kubectl --kubeconfig ~/.kube/config_prd.conf'
I prefer this method as it requires me to define the environment for each command, whereas using an environment variable could potentially lead you to run a command in the wrong environment.
The previous answers have been very solid and informative; I will try to add my 2 cents here.
Configure kubeconfig file knowing its precedence
If you’re using kubectl, here’s the preference that takes effect while determining which kubeconfig file is used.
1. use the --kubeconfig flag, if specified
2. use the KUBECONFIG environment variable, if specified
3. use the $HOME/.kube/config file
With this, you can easily override kubeconfig file you use per the kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only info about that context, and the --flatten flag allows us to keep the credentials unredacted.
For your example
kubectl get pods --kubeconfig=/path/to/admin.conf
#
# or:
#
KUBECONFIG=/path/to/admin.conf kubectl get pods
#
# or:
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:/path/to/admin.conf kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Although this precedence list is not officially specified in the documentation, it is codified here. If you're developing client tools for Kubernetes, you should consider using the cli-runtime library, which will bring the standard --kubeconfig flag and $KUBECONFIG detection to your program.
ref article: https://ahmet.im/blog/mastering-kubeconfig/
I name all cluster configs .kubeconfig, and this lives in the project directory.
Then in .bashrc or .bash_profile I have the following export:
export KUBECONFIG=.kubeconfig:$HOME/.kube/config
This way, when I'm in the project directory, kubectl will load the local .kubeconfig.
Hope that helps
kubectl uses ~/.kube/config as the default configuration file. So you could just copy your admin.conf over it.
Because there is no built-in kubectl config merge command at the moment (follow this), you can add this function to your .bashrc (or .zshrc):
function kmerge() {
  if [ $# -eq 0 ]; then
    echo "Please pass the location of the kubeconfig you wish to merge"
    return 1
  fi
  KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}
Then you can just run from the terminal:
kmerge /path/to/admin.conf
and the config file will be merged to ~/.kube/config.
You can now switch to the new context with:
kubectl config use-context <new-context-name>
Or if you're using kubectx (recommended) you can run: kubectx <new-context-name>.
(The kmerge function is based on @MichaelSp's answer on this post.)
kubectl keeps the paths to search for config files in $KUBECONFIG.
If you want to add one more config path on top of the existing KUBECONFIG without overriding it (keeping ~/.kube/config as the default path to search), just run the following each time you want to add a conf file to the KUBECONFIG path:
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf
You can check that it worked by listing the available contexts:
kubectl config get-contexts
Then select the one you want to use
kubectl config use-context <context-name>
To manage your config files properly, place the snippet below in your profile file and source your .profile / .bash_profile:
for kconfig in $HOME/.kube/config $(find $HOME/.kube/ -iname "*.config")
do
  if [ -f "$kconfig" ]; then
    export KUBECONFIG=$KUBECONFIG:$kconfig
  fi
done
Then switch contexts from kubectl.
When you type kubectl, I guess you prefer to know which cluster you are pointing at. Maybe it's worth creating an alias for that?
alias kube-mycluster='kubectl --kubeconfig ~/.kube/mycluster.conf'
This is possible:
export KUBECONFIG=~/.kube/config:~/.kube/cluster0:~/.kube/cluster1:~/.kube/cluster3
and:
kubectl config use-context cluster0