Copying from Windows to Linux without copying the entire path - powershell

I am using Cygwin to copy all the files in a local Windows folder to an EC2 Linux instance, from inside PowerShell. When I attempt to copy all the files in a folder, it copies the pathname as a folder:
\cygwin64\bin\scp.exe -i "C:\cygwin64\home\Ken\ken-key-pair.pem" -vr \git\configs\configs_test\ ec2-user@ec2-22-75-189-18.compute-1.amazonaws.com:/var/www/html/temp4configs/
will copy the correct files, but it incorrectly includes the Windows-format path in the destination, producing a path like:
/var/www/html/temp4configs/\git\configs\configs_test/file.php
I have tried an asterisk after the folder without the -r, such as:
\cygwin64\bin\scp.exe -i "C:\cygwin64\home\Ken\ken-key-pair.pem" -v \git\configs\configs_test\* ec2-user@ec2-22-75-189-18.compute-1.amazonaws.com:/var/www/html/temp4configs/
but that will return an error such as
"gitconfigsconfigs_test*: No such file or directory"
What can I do to copy the files without copying the path?
Thanks

When using Cygwin programs it is safer to use POSIX paths, and most of the time it is the only way. To convert a Windows path to a POSIX path, use cygpath:
$ cygpath -u "C:\cygwin64\home\Ken\ken-key-pair.pem"
/home/Ken/ken-key-pair.pem
$ cygpath -u "C:\git\configs\configs_test\ "
/cygdrive/c/git/configs/configs_test/
Using the Windows path will cause the server to misunderstand the client's request.
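Putting the two conversions together, the corrected copy from a Cygwin bash shell might look like this (a sketch using the key, source, and destination from the question; the trailing /* copies the folder's contents rather than the folder itself):

```shell
# Convert both Windows paths to POSIX form first.
KEY=$(cygpath -u 'C:\cygwin64\home\Ken\ken-key-pair.pem')
SRC=$(cygpath -u 'C:\git\configs\configs_test')

# Copy only the contents of the folder, so no Windows path
# fragment ends up inside the destination directory.
scp -i "$KEY" -vr "$SRC"/* \
  ec2-user@ec2-22-75-189-18.compute-1.amazonaws.com:/var/www/html/temp4configs/
```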

Related

No such file or directory cygwin + rsync

I am trying to send a file in cygwin commandline with rsync and openssh.
So the command I'm using is rsync -P -e "ssh -p 2222" deborahtrez@209.6.204.90:/home/dell/cygdrive/c/Users/dell/Videos/Movavi/installation_tutorial.mp4
Then it asks for my password, which I enter. I press ENTER and always get "No such file or directory" back.
The file path to my video "installation_tutorial.mp4" is in C:\Users\dell\Videos\Movavi so the full path becomes C:\Users\dell\Videos\Movavi\installation_tutorial.mp4. So this is what I have tried so far.
Please, what am I doing wrong? Is it my file path? If so, what is the correct file path that I should be using? Help!
The path is wrong. The best way to convert from Windows to POSIX is to use cygpath:
$ cygpath -u 'C:\Users\dell\Videos\Movavi\installation_tutorial.mp4'
/cygdrive/c/Users/dell/Videos/Movavi/installation_tutorial.mp4
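Since the question says "send", the file presumably lives on the local Windows machine, so the converted POSIX path goes first and the remote side last. A sketch of the corrected transfer (the remote destination directory here is an assumption; the original command also omitted a destination argument entirely):

```shell
# Push the local file (POSIX path from cygpath) to the remote server.
rsync -P -e "ssh -p 2222" \
  /cygdrive/c/Users/dell/Videos/Movavi/installation_tutorial.mp4 \
  deborahtrez@209.6.204.90:/home/dell/
```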

Set initdb and pg_ctl

I installed Postgresql from source.
I tried running the commands:
which initdb
which pg_ctl
but I get a blank response.
I know where these executables reside in my directory.
How might I set initdb and pg_ctl?
Thanks for your help.
You received blank output from 'which' because these binaries are not on your machine's PATH. Until they are, you will also need to run them with an explicit path (like ./pg_ctl).
You can add the path of your postgres bin directory to $PATH.
Eg.
export PATH=$PATH:/Postgres/Installation/Path/bin/
You can also set it permanently by adding the same export line to the .bash_profile file in the home directory of the Postgres user.
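For example, assuming the binaries were installed under /usr/local/pgsql/bin (the default prefix for a source build; adjust to your actual install path):

```shell
# Append the Postgres bin directory to PATH for the current session.
export PATH="$PATH:/usr/local/pgsql/bin"

# To make it permanent, append the same line to ~/.bash_profile:
# echo 'export PATH="$PATH:/usr/local/pgsql/bin"' >> ~/.bash_profile
```

After this, `which initdb` and `which pg_ctl` should print their full paths.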

How to download a specific directory w/o the creation of the directory path from the root of the site to the specified directory

wget -r -np www.a.com/b/c/d
The above will create a directory called 'www.a.com' in the current working directory on my local computer, containing all subdirectories on the path to 'd'.
I only want directory 'd' (and its contents) created in my cwd.
How can I achieve this?
You can specify the destination directory explicitly and avoid the creation of sub-directories with the following command:
wget -nd -P /home/d www.a.com/b/c/d
The -nd option avoids creating sub-directories and -P sets the destination to /home/d, so all your files will be downloaded to the "/home/d" folder only.
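If you want wget itself to recreate only the directory 'd' in your cwd (rather than flattening everything with -nd), the -nH and --cut-dirs options are commonly combined for this. A sketch, assuming 'd' sits two directory levels below the host root as in the question:

```shell
# -r            recurse
# -np           never ascend above d
# -nH           drop the www.a.com host directory
# --cut-dirs=2  drop the leading b/ and c/ path components
wget -r -np -nH --cut-dirs=2 www.a.com/b/c/d/
```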

How do I copy symbolic links between servers?

We are moving web servers. (LAMP)
Webserver 1 has hundreds of symbolic links pointing to files in a different directory (e.g. ../../files/001.png). When we move to the new server (we downloaded the site to my computer and are re-uploading it to Webserver 2 using the SFTP client Transmit), the symbolic links are not copied...
Is there a better way to get the symbolic links from one server to another apart from recreating them on the new server?
rsync -a from one server to another preserves file attributes and symlinks:
rsync -av user@server:/path/to/source user@server2:/path/to/target
Something like the following? On Webserver 1:
tar czf - the_directory | (ssh Webserver2 "cd /path/to/wherever && tar xzf -")
This writes a tar of the files to copy to STDOUT and pipes it into an untar on the other server over ssh. It can also be faster than a recursive scp.
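You can see the symlink-preserving behaviour locally: tar (like rsync -a) carries links across as links instead of following them. A minimal sketch with throwaway directory names:

```shell
# Build a tiny tree containing a relative symlink.
mkdir -p src/files dst
echo data > src/files/001.png
ln -s files/001.png src/link.png

# Pipe a tar of the tree into an untar elsewhere, as in the command above.
tar cf - -C src . | tar xf - -C dst

# dst/link.png is still a symlink pointing at files/001.png.
ls -l dst/link.png
```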

How to specify the download location with wget?

I need files to be downloaded to /tmp/cron_test/. My wget code is
wget --random-wait -r -p -nd -e robots=off -A".pdf" -U mozilla http://math.stanford.edu/undergrad/
So is there some parameter to specify the directory?
From the manual page:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the
directory where all other files and sub-directories will be
saved to, i.e. the top of the retrieval tree. The default
is . (the current directory).
So you need to add -P /tmp/cron_test/ (short form) or --directory-prefix=/tmp/cron_test/ (long form) to your command. Also note that if the directory does not exist it will get created.
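Applied to the command from the question, that gives (same flags, only -P added):

```shell
wget --random-wait -r -p -nd -e robots=off -A".pdf" -U mozilla \
  -P /tmp/cron_test/ http://math.stanford.edu/undergrad/
```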
-O is the option to specify the path of the file you want to download to:
wget <uri> -O /path/to/file.ext
-P is prefix where it will download the file in the directory:
wget <uri> -P /path/to/folder
Make sure you have the URL correct for whatever you are downloading, and quote it. Characters like ? and & are special to the shell, and wget names the saved file after the last component of the URL, query string included.
For example:
wget "sourceforge.net/projects/ebosse/files/latest/download?source=typ_redirect"
will download into a file named download?source=typ_redirect.
As you can see, knowing a thing or two about URLs helps to understand wget.
I am booting from a hirens disk and only had Linux 2.6.1 as a resource (import os is unavailable). The correct syntax that solved my problem downloading an ISO onto the physical hard drive was:
wget "(source url)" -O "(directory where HD was mounted)/isofile.iso"
You can check that the URL is correct by verifying that a plain wget downloads into a file named index.html (the default file) with the expected size and other attributes of the file you need:
wget "(source url)"
Once the URL and source file are correct and it is downloading into index.html, you can stop the download (ctrl + c) and re-run it with the output file changed by adding:
-O "<specified download directory>/filename.extension"
after the source url.
In my case this results in downloading an ISO and storing it as a binary file under isofile.iso, which hopefully mounts.
"-P" is the right option, please read on for more related information:
wget -nd -np -P /dest/dir --recursive http://url/dir1/dir2
Relevant snippets from man pages for convenience:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the
filenames will get extensions .n).
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
man wget:
-O file
--output-document=file
wget "url" -O /tmp/cron_test/<file>