Keep original documents' dates with PSFTP

I have downloaded some files with PSFTP from a SQL Server. The problem is that PSFTP changes the creation and last-modified dates of the files when downloading them to a local folder. For me it is important to keep the original dates. Is there any command to set/change this? Thanks
This is the script of the batch file
psftp.exe user@host -i xxx.ppk -b abc.scr
This is the script of the SCR file
cd /path remote folder
lcd path local folder
mget *.csv
exit

I'm not familiar with PSFTP, and after looking at the docs I don't see any option to do this. However, you can use the -p flag of pscp to preserve dates and times.
See the PSCP documentation.
(Note it's a lower-case p; the upper-case -P is for specifying the port.)
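For example, the batch file could call pscp instead of psftp, reusing the same key and folders from the question (a sketch only; the quoting of paths with spaces is an assumption, adjust to your real paths):
pscp.exe -p -i xxx.ppk "user@host:/path remote folder/*.csv" "path local folder"
The -p switch makes pscp give the downloaded copies the same modification times as the originals on the server.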

Related

Compress folder to .WAR file using PowerShell

I have a folder on my D drive (D://MyFolder), which I want to compress into a .WAR file (D://MyFolder.war).
I am trying to automate a deployment process using PowerShell, so I am looking for a PowerShell (or MS command line) command to do this.
I've tried to google and scour Stack Overflow, but haven't been able to find anything yet. This is my first 'PowerShell Adventure', so I'm not entirely sure if/how I can do this.
Many thanks for your help.
What about something simple like this (use an explicit path if you do not have JAVA_HOME set, which you can check by echoing $env:JAVA_HOME):
cd D:\MyFolder
& "<path_to_your_java>\bin\java.exe" -cvf my_folder.war *
java options:
-c create new archive
-v generate verbose output on standard output
-f specify archive file name
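If you want to sanity-check the result, the same tool can list the archive's contents (a quick verification sketch, reusing the placeholder JDK path from above):
& "<path_to_your_jdk>\bin\jar.exe" -tvf my_folder.war
Here -t prints the table of contents instead of creating a new archive.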

AWS S3, Deleting files from local directory after upload

I have backup files in different directories on one drive. Files in those directories can be quite big, up to 800 GB or so. So I have a batch file with a set of scripts which upload/sync files to S3.
See example below:
aws s3 sync R:\DB_Backups3\System s3://usa-daily/System/ --exclude "*" --include "*/*/Diff/*"
The upload time can vary but so far so good.
My question is, how do I edit the script or create a new one which checks in the S3 bucket that the files have been uploaded, and ONLY if they have been uploaded deletes them from the local drive; otherwise it leaves them on the drive?
(Ideally it would check each file.)
I'm not familiar with an aws s3 or aws cli command that can do that. Please let me know if I made myself clear or if you need more details.
Any help will be very appreciated.
Best would be to use aws s3 mv with the --recursive parameter for multiple files.
When passed with the parameter --recursive, the following mv command recursively moves all files under a specified directory to a specified bucket and prefix while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg:
aws s3 mv myDir s3://mybucket/ --recursive --exclude "*.jpg"
Output:
move: myDir/test1.txt to s3://mybucket/test1.txt
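Applied to the sync command from the question, that would be something along these lines (a sketch only; it keeps the same bucket and filters, but remember that mv deletes the local files as it uploads, so test it on a small directory first):
aws s3 mv R:\DB_Backups3\System s3://usa-daily/System/ --recursive --exclude "*" --include "*/*/Diff/*"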
Hope this helps.
As the answer by @ketan shows, the Amazon aws client cannot do a batch move.
You can use WinSCP put -delete command instead:
winscp.com /log=S3.log /ini=nul /command ^
"open s3://S3KEY:S3SECRET#s3.amazonaws.com/" ^
"put -delete C:\local\path\* /bucket/" ^
"exit"
You need to URL-encode special characters in the credentials. WinSCP GUI can generate an S3 script template, like the one above, for you.
Alternatively, since WinSCP 5.19, you can use -username and -password switches, which do not need any encoding:
"open s3://s3.amazonaws.com/ -username=S3KEY -password=S3SECRET" ^
(I'm the author of WinSCP)

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my files-to-download.txt has these kind of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of url's the way I do now, keeping my parameters.
b.- get all files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest 2 approaches -
Use the "-nc" or the "--no-clobber" option. From the man page -
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode; it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/scripts as well.
Sample pseudocode bash script -
for i in `cat list-of-files-to-download.txt`; do
    wget <all your flags except the -i flag> $i -O /path/to/custom/directory/filename
done
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check if the file exists on the disk, and then take a decision to rename the temp file to the name that you want.
This offers much more control over your downloads.
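For instance, a minimal sketch along those lines, which builds a unique local name from each URL's parent directory plus its basename (note that -N has to be dropped here, since wget refuses to combine -N with -O; the timeout value is just an example):
#!/bin/bash
# Download every URL from the list into one directory, prefixing each
# file name with its parent directory so same-named files do not collide.
dest=/directory/for/downloaded/files
while read -r url; do
    # e.g. http://website/directory1/picture-same-name.jpg -> directory1_picture-same-name.jpg
    name="$(basename "$(dirname "$url")")_$(basename "$url")"
    wget --user-agent='some-agent' --referer=http://some-referrer.html \
         --timeout=30 -O "$dest/$name" "$url"
done < list-of-files-to-download.txt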

Gathering Files from multiple computers into one

I am trying to gather files/folders from multiple computers on my network into one centralized folder on the command console (this is the name of the pseudo-server for this set of computers).
Basically, what I need is to collect a certain file from all the computers connected to my network and back it up on the console.
Example:
* data.txt // this is the file that I need to back up, and it is located in all the computers in the same location
* \console\users\administrator\desktop\backup\%computername% // I need each computer to create a folder with its computer name on the command console's desktop so I can keep track of which files belong to which computer
I was trying to use psexec to do this using the following code:
psexec @cart.txt -u administrator -p <password> cmd /c (^net use \\console /USER:administrator <password> ^& mkdir \\console\users\Administrator\Desktop\backup\%computername% ^& copy c:\data.txt \\console\USERS\Administrator\DESKTOP\backup\%computername%\)
Any other suggestions? I'm having trouble with this command.
Just use the copy command; it's much easier.
Take a look:
for /F %%a in (computerslist.txt) do (
copy \\%%a\c$\users\administrator\desktop\%%a\*.txt c:\mycollecteddata\%%a
)
That will copy all *.txt files from every computer listed in computerslist.txt; the copy runs with the current credentials. Save the code in a *.cmd file and execute it as the right user; you can create a scheduled task that runs under a user that is common to all the computers.
Good work.
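If you want to stay closer to the layout described in the question (C:\data.txt collected into per-computer folders on the console's desktop), a hedged variation of the same loop could look like this (computerslist.txt comes from the answer above, the backup path from the question; run it as a user with admin rights on the remote machines):
rem Pull C:\data.txt from each machine into a per-computer backup folder.
for /F %%a in (computerslist.txt) do (
    mkdir "\\console\users\Administrator\Desktop\backup\%%a" 2>nul
    copy "\\%%a\c$\data.txt" "\\console\users\Administrator\Desktop\backup\%%a\"
)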

compare file size after ftp get with the original file on server

In SQL Server I'm using xp_cmdshell to run FTP commands. I have no problem getting the list of files or copying files to the local server, but I want to compare the copied file's size to the original to make sure the get was successful.
Any ideas on how to compare file sizes?
From a command prompt you can use the DOS file compare command (fc). In your case you probably want to do a binary compare (there is no file-size compare); a binary compare should work.
Most DOS commands will return a code that lets you know the status.
http://www.computerhope.com/fchlp.htm
EDIT
Sorry, I re-read your question and realized you want to compare it against a file on the FTP server. I think this is a moot point, since if FTP reports a successful file transfer there is no reason to compare (unless your source of comparison is not the FTP site). Does that make sense?
What you could do is use the FTP ls command:
ftp> ls <filename>
where ftp> is the FTP prompt and not part of the command. This command gives you the file size in bytes. Then you need to use a DOS command for the local file. Here is a Stack Overflow question (and answer) about that:
Windows command for file size only?
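For the local side, the trick from that question is the %~z modifier in a for loop; a minimal sketch from the command prompt (the path is just an example):
for %A in ("C:\ftp\downloads\myfile.csv") do @echo Local size: %~zA
In a batch file (or a .cmd invoked via xp_cmdshell), double the percent signs, i.e. %%A and %%~zA.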