Copy symlink to another location with ansible?

I have a symlink and want to create a new symlink that points to the same location as the first one. I know I can use cp -d /path/to/symlink /new/path/to/symlink, but how do I do that with an Ansible module?
I was trying copy: src=/path/to/symlink dest=/new/path/to/symlink follow=yes, but it makes a copy of the file instead of creating a new symlink. Any ideas?

You have two options here.
1) Create the new symlink using the file module:
- name: Create symlink
  file: src=/path/to/symlink dest=/new/path/to/symlink state=link
2) Run your working cp command via the shell module; it will do the same:
- name: Create symlink
  shell: cp -d /path/to/symlink /new/path/to/symlink
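Note that option 1 makes the new link point at the first symlink itself, which still resolves to the same place. If you want the new link to point directly at the original target instead, one way is to read the target first and pass it to the file module; a minimal sketch, reusing the paths from the question and an ad-hoc call against localhost as a stand-in for your real host pattern:
# Read the raw target of the existing symlink (not the fully resolved path).
target=$(readlink /path/to/symlink)
# Create the new symlink pointing at that same target.
ansible localhost -m file -a "src=${target} dest=/new/path/to/symlink state=link"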

Related

Run a pod with tar and try to push file into the mount point

Our basic need is to check whether we are able to copy/push a file to a mount point or not. For this, I was advised to run a pod with tar and try to push a file into the mount point. I have searched the web and found the following commands:
-> kubectl cp [file-path] [pod-name]:/[path] (This command gives no error, but it does not work and the file is not visible in the mentioned location.)
-> Verified the absence of file in the remote pod using the following command:
kubectl exec <pod_name> -- ls -la /
-> Found the command below, which uses tar options, but I don't want to exclude any file and hence I'm not sure how to proceed with it:
kubectl exec -n <some-namespace> <some-pod> -- tar cf - --exclude='pattern' /tmp/foo | tar xf - -C /tmp/bar
-> Is there any other tar option that can help me in pushing the file to the mountpoint?
Also, the kubectl cp help command says that tar binary must be present for copy to work. Maybe this is the reason why I am unable to copy. But, I don't know how to check the tar binary's presence and how to get it if it's not there. Please help me with this.
I'm not sure why the cp command didn't work for you. However, I tried to add a tar file inside the pod and it worked.
I used the following command:
kubectl cp ./<TAR FILE PATH> <NAMESPACE>/<POD NAME>:/<INSIDE POD PATH>
It's not best practice to add a file to a pod like this. You can instead use an init container, add the file during the build of the Docker image, or alternatively use a volume mount.
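On the question of whether the tar binary is present (kubectl cp needs tar on both ends), a quick check is to exec into the pod and look for it; a minimal sketch, with <namespace> and <pod> as placeholders from the question:
# Check whether the tar binary is available inside the container.
# Some minimal images lack 'which', so fall back to calling tar directly.
kubectl exec -n <namespace> <pod> -- which tar \
  || kubectl exec -n <namespace> <pod> -- tar --version
If neither command finds tar, kubectl cp cannot work against that image.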

Copying from Windows to Linux without copying the entire path

I am using Cygwin to copy all the files in a local Windows folder to an EC2 Linux instance from inside PowerShell. When I attempt to copy all the files in a folder, it copies the path name as a folder:
\cygwin64\bin\scp.exe -i "C:\cygwin64\home\Ken\ken-key-pair.pem" -vr \git\configs\configs_test\ ec2-user@ec2-22-75-189-18.compute-1.amazonaws.com:/var/www/html/temp4configs/
This copies the correct files, but incorrectly includes the path in Windows format, producing a path like:
/var/www/html/temp4configs/\git\configs\configs_test/file.php
I have tried an asterisk after the folder without the -r, such as:
\cygwin64\bin\scp.exe -i "C:\cygwin64\home\Ken\ken-key-pair.pem" -v \git\configs\configs_test\* ec2-user@ec2-22-75-189-18.compute-1.amazonaws.com:/var/www/html/temp4configs/
but that will return an error such as
"gitconfigsconfigs_test*: No such file or directory"
What can I do to copy the files without copying the path?
Thanks
When using Cygwin programs it is safer to use POSIX paths, and most of the time it is the only way. To convert from Windows to POSIX paths, use cygpath:
$ cygpath -u "C:\cygwin64\home\Ken\ken-key-pair.pem"
/home/Ken/ken-key-pair.pem
$ cygpath -u "C:\git\configs\configs_test\ "
/cygdrive/c/git/configs/configs_test/
Using the Windows form will cause the server to misunderstand the client's request.
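Putting it together, the original transfer might look like this when run from a Cygwin bash shell (a sketch reusing the key, folder and host from the question; adjust the source depending on whether you want the folder itself or only its contents copied):
# Convert the Windows paths to POSIX form and hand them to scp.
scp -i "$(cygpath -u 'C:\cygwin64\home\Ken\ken-key-pair.pem')" \
    -vr "$(cygpath -u 'C:\git\configs\configs_test')" \
    ec2-user@ec2-22-75-189-18.compute-1.amazonaws.com:/var/www/html/temp4configs/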

How to change added torrent file location for transmission-daemon

The transmission-daemon stores by default some temporary files and the added torrent files in the following directory:
/var/lib/transmission-daemon/.config/transmission-daemon/
Is it possible to set another directory?
Why not just ln -s /var/lib/transmission-daemon/.config/transmission-daemon/ ~/MyAwesomeTransmissionConfigDirectory or whatnot? ;-)
You'll probably just need sudo to ls it :)
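If the goal is to actually keep the data somewhere else rather than just reach it more conveniently, the same symlink idea works in the other direction; a sketch, assuming the daemon is managed by systemd and using /srv/transmission-config as a placeholder for the new location:
# Stop the daemon so it does not write while the directory is moved.
sudo systemctl stop transmission-daemon
# Move the real config directory to the new location.
sudo mv /var/lib/transmission-daemon/.config/transmission-daemon /srv/transmission-config
# Leave a symlink behind so the daemon still finds its files.
sudo ln -s /srv/transmission-config /var/lib/transmission-daemon/.config/transmission-daemon
sudo systemctl start transmission-daemon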

Yocto post script to move two deployed image files into a folder

I'm trying to create a recipe that moves two image archives into a directory inside Yocto's deploy directory /tmp/deploy/images. I have already created a new image that simply includes the other two recipes, but I haven't been able to use any of the available scripting functions to copy the generated images into a separate folder scheme. I've tried using do_install_append() to simply touch a new file, but to no avail, and no warnings or errors are shown in the terminal during image creation.
Essentially, the workflow would be as follows inside of my-image.bb
....
require my-1st-image.bb
require my-2nd-image.bb
post_script(){
# rm -rf ${WORKDIR}/images/<machine>/USB
# mkdir ${WORKDIR}/images/<machine>/USB
# cp <my-1st-image.tar.gz> ${WORKDIR}/images/<machine>/USB
# cp <my-2nd-image.tar.gz> ${WORKDIR}/images/<machine>/USB
}
You need to use install.
Try the following:
post_script() {
    install -d ${WORKDIR}/images/<machine>/USB
    install -m 0755 <my-1st-image.tar.gz> ${WORKDIR}/images/<machine>/USB
    install -m 0755 <my-2nd-image.tar.gz> ${WORKDIR}/images/<machine>/USB
}
The first of the three install commands above creates the target directory; the other two copy the files into it.
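If the copies are meant to land under the deploy directory mentioned in the question rather than under ${WORKDIR}, the standard DEPLOY_DIR_IMAGE variable (which points at tmp/deploy/images/<machine>) may be the more direct reference. A sketch only: the function name and archive names are placeholders, and wiring the function into the build (for example with addtask) still has to be done in the recipe:
do_deploy_usb() {
    # DEPLOY_DIR_IMAGE is Yocto's standard per-machine deploy location.
    install -d ${DEPLOY_DIR_IMAGE}/USB
    install -m 0644 ${DEPLOY_DIR_IMAGE}/my-1st-image.tar.gz ${DEPLOY_DIR_IMAGE}/USB/
    install -m 0644 ${DEPLOY_DIR_IMAGE}/my-2nd-image.tar.gz ${DEPLOY_DIR_IMAGE}/USB/
}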

How to download a specific directory without creating the directory path from the root of the site to the specified directory

wget -r -np www.a.com/b/c/d
The above will create a directory called 'www.a.com' in the current working directory on my local computer, containing all subdirectories on the path to 'd'.
I only want directory 'd' (and its contents) created in my cwd.
How can I achieve this?
You can specify that directory name explicitly and avoid the creation of sub-directories with the following line:
wget -nd -P /home/d www.a.com/b/c/d
The -nd option avoids the creation of sub-directories, and -P sets the download prefix to /home/d, so all files are downloaded into the /home/d folder only.
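Note that -nd flattens everything it downloads into that one folder. If you instead want to keep d's internal structure but drop the leading host and b/c path components, a common alternative (a sketch using the example URL from the question) is -nH together with --cut-dirs:
# -r -np        : recurse without ascending to parent directories
# -nH           : do not create the www.a.com host directory
# --cut-dirs=2  : drop the leading b/ and c/ components, keeping d/ and below
wget -r -np -nH --cut-dirs=2 www.a.com/b/c/d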