Gathering Files from multiple computers into one

I am trying to gather files/folders from multiple computers on my network into one centralized folder on the command console (this is the name of the pseudo-server for this set of computers).
Basically, what I need is to collect a certain file from all the computers connected to my network and back it up on the console.
Example:
* data.txt // this is the file that I need to back up; it's located in the same place on all the computers
* \\console\users\administrator\desktop\backup\%computername% // I need each computer to create a folder named after its computer name on the command console's desktop, so I can keep track of which file belongs to which computer
I was trying to use PsExec to do this with the following command:
psexec @cart.txt -u administrator -p <password> cmd /c (^net use \\console /USER:administrator <password> ^& mkdir \\console\users\Administrator\Desktop\backup\%computername% ^& copy c:\data.txt \\console\USERS\Administrator\DESKTOP\backup\%computername%\)
Any other suggestions? I'm having trouble with this command.

Just use the copy command, it's much easier.
Take a look:
for /F %%a in (computerslist.txt) do (
    if not exist "c:\mycollecteddata\%%a" mkdir "c:\mycollecteddata\%%a"
    copy "\\%%a\c$\users\administrator\desktop\%%a\*.txt" "c:\mycollecteddata\%%a"
)
That will copy all *.txt files from every computer listed in computerslist.txt; the copy runs with the current credentials. Save the code in a *.cmd file and execute it as the right user; you can create a scheduled task that runs under a user that is common to all the computers.
Good work.

Related

Using gsutil rsync to copy new files only

I have this little script running on a Windows machine that syncs files down from a Google Storage bucket and sends the new ones to a printer hot folder. A file disappears from the printer's hot directory as soon as it is printed. As you can see, the script uses a backup directory to compare each and every file to make sure only new files are sent to the printer. This solution works well, but it is clearly not very efficient for a large volume of files. Just wondering if rsync has options to copy only the files that are new in the bucket since the last run.
@echo off
SET SOURCE=%1
SET DESTINATION=%2
SET HOTDIR=%3
SET BACKUPDIR=%4
SET GSUTIL_INST_DIR=C:\PRINT
SET PARENT_DIR=C:\PRINT
SET GSUTIL=%GSUTIL_INST_DIR%\google-cloud-sdk-386.0.0-windows-x86_64-bundled-python\google-cloud-sdk\bin\gsutil
SET GSUTIL=%GSUTIL% -m
call %GSUTIL% rsync -d -C %SOURCE%/ %DESTINATION%/
:: Compare each file before sending it to the hot directory
:: "delims=" keeps filenames containing spaces intact
for /f "delims=" %%F in ('dir /b "%DESTINATION%"') do (
if not exist "%BACKUPDIR%\%%F" (
REM the trailing * makes XCOPY treat the destination as a file name and skip the File/Directory prompt
XCOPY /Y /F "%DESTINATION%\%%F" "%HOTDIR%\%%F*"
)
)
robocopy %DESTINATION% %BACKUPDIR% /MIR
SET CONFIRMATION="%DATE% %TIME% %SOURCE% to %HOTDIR%"
Just wondering if rsync has options to copy only new files from bucket
since the last run.
The solution below may work for your scenario:
Maintain the last script-run timestamp in a local history file.
List the GCS objects (files) sorted by date:
gsutil ls -l gs://your-bucket/ | sort -k 2
The command above prints the timestamp at which each object was added, so you can pipe the output and filter the object list against the timestamp stored in the history file.
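As a sketch of that filtering (assuming the gsutil ls -l line format of size, ISO-8601 timestamp, then URL, and a history file holding a single ISO-8601 timestamp; the sample listing below stands in for real gsutil output), awk's string comparison is enough, because ISO-8601 timestamps sort lexicographically:

```shell
#!/bin/sh
# Keep only objects newer than the timestamp recorded at the last run.
# ISO-8601 timestamps compare correctly as plain strings, so a string
# comparison in awk is sufficient.
LAST_RUN="2023-05-01T00:00:00Z"   # normally: LAST_RUN=$(cat last_run.txt)

# Sample listing standing in for: gsutil ls -l gs://your-bucket/ | sort -k 2
listing='    1024  2023-04-30T23:00:00Z  gs://your-bucket/old.pdf
    2048  2023-05-01T09:30:00Z  gs://your-bucket/new.pdf'

# Field 2 is the timestamp, field 3 the object URL
new_files=$(printf '%s\n' "$listing" | awk -v last="$LAST_RUN" '$2 > last {print $3}')
echo "$new_files"
```

Each URL printed here would then be fetched with gsutil cp and the new timestamp written back to the history file.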

Postgresql Auto backup using powershell and task scheduler

I am running the attached script to back up a PostgreSQL database using Task Scheduler. The script executes successfully, but the backup does not happen. The same script runs fine when I launch it from PowerShell directly.
I want to schedule a daily backup on Windows Server. Please help me, or suggest an alternative way to automate the backup.
Try this script; it creates a backup file whose name contains a timestamp.
First, create a backup.bat file and just run it (set your credentials and database name):
@echo off
echo 'Generate backup file name'
REM the offsets below assume the US date format (Ddd MM/DD/YYYY); adjust them for your locale
set CUR_YYYY=%date:~10,4%
set CUR_MM=%date:~4,2%
set CUR_DD=%date:~7,2%
set CUR_HH=%time:~0,2%
if %CUR_HH% lss 10 (set CUR_HH=0%time:~1,1%)
set CUR_NN=%time:~3,2%
set CUR_SS=%time:~6,2%
set BACKUP_FILE=%CUR_YYYY%-%CUR_MM%-%CUR_DD%_%CUR_HH%-%CUR_NN%-%CUR_SS%.custom.backup
echo 'Backup path: %BACKUP_FILE%'
echo 'Creating a backup ...'
set PGPASSWORD=strongpas
pg_dump.exe --username="postgres" -d AdventureWorks --format=custom -f "%BACKUP_FILE%"
echo 'Backup successfully created: %BACKUP_FILE%'
As a result, a file with the extension .custom.backup should appear in the directory.
If you get an error that the executable file "pg_dump.exe" was not found, specify the full path to it, e.g. "C:\Program Files\PostgreSQL\12\bin\pg_dump.exe", or add the directory with the PostgreSQL binaries to the PATH environment variable.
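For comparison, if you ever run the same backup from a Unix-like shell (or Git Bash), the date-substring parsing above collapses to a single date call; this is only a sketch, assuming pg_dump is on the PATH:

```shell
#!/bin/sh
# Build a timestamped backup name with one date call; the format string
# avoids the locale-dependent %date%/%time% substring parsing entirely.
BACKUP_FILE="$(date +%Y-%m-%d_%H-%M-%S).custom.backup"
echo "$BACKUP_FILE"
# then, for example:
# PGPASSWORD=strongpas pg_dump --username=postgres -d AdventureWorks \
#   --format=custom -f "$BACKUP_FILE"
```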
To schedule regular execution, you can use the Windows Task Scheduler:
Press win + r and enter taskschd.msc
Select Create Basic Task
Then follow the steps of the wizard, specify the schedule, and in the "Action" section, specify "Start a program" and then the path to the backup.bat file
To check that everything is correct, find the task in the list and select Run
You can read more about PostgreSQL backups on Windows in the PostgreSQL documentation.

Keep original documents' dates with PSFTP

I have downloaded some files with PSFTP from a SQL Server. The problem is that PSFTP changes the dates of creation/update and last modified of the files when downloading them in a local folder. For me it is important to keep the original dates. Is there any command to set/change it? Thanks
This is the script of the batch file:
psftp.exe user@host -i xxx.ppk -b abc.scr
And this is the script of the SCR file:
cd /path remote folder
lcd path local folder
mget *.csv
exit
I'm not familiar with PSFTP, and after looking at the docs I don't see any option to do this. However, you can use the -p flag of pscp to preserve dates and times, for example: pscp -p user@host:/remote/folder/*.csv C:\localfolder\
See the pscp documentation for details.
(Note it's a lowercase p; the uppercase -P is for specifying the port.)

Command to close Shared Folder session of Other client machine

How can we delete particular folders and sub-folders on a network share that were opened by another user, when a client machine opens the share path with write access to that folder? The psfile and net files commands will close the client machine's file session, but a few seconds (1-2) later another session for that user is created automatically on the server.
Is there any batch command to permanently kill the sessions of all connections to a particular shared path?
This is what I use with net files, but the process recreates the session within about 2 seconds of being killed:
for /f "skip=4 tokens=1" %a in ('net files') do net files %a /close
The command above closes the sessions, but another session is created again 1-2 seconds later.
Try this:
openfiles /Disconnect /S host /OP "path\to\file.exe" /ID * /U "username" /P "password"
Explanation of the command:
openfiles /disconnect
Enables an administrator to disconnect files and folders that have been opened remotely through a shared folder.
/disconnect -> closes the open file.
/s -> the remote host, for example: 192.168.1.10.
/op -> the full path of the open file, for example: "C:\projects\file.exe".
/id -> the ID of the open file; passing "*" means all IDs.
/u -> a user with permission to close files on the remote server.
/p -> that user's password.

MongoDB script to backup replication set locally to a Windows Server

I would like to make a daily backup of my MongoDB replica set, which runs on Windows 2012 servers.
End goal would be to get a daily backup and write the backup to a remote or local share - Windows.
Can I batch the mongodump command?
Any help would be greatly appreciated!!
Sorry, it's a bit late, but the following seems to work OK for me. The script dumps the database and compresses the output using 7-Zip.
1) Create backup script (backup.bat)
@echo off
REM move into the backups directory
CD C:\database_backups
REM Create a file name for the database output which contains the date and time. Replace any characters which might cause an issue.
set filename=database %date% %time%
set filename=%filename:/=-%
set filename=%filename: =__%
set filename=%filename:.=_%
set filename=%filename::=-%
REM Export the database
echo Running backup "%filename%"
C:\mongodb\mongodump --out %filename%
REM ZIP the backup directory
echo Zipping backup "%filename%"
"c:\Program Files\7-Zip\7z.exe" a -tzip "%filename%.zip" "%filename%"
REM Delete the backup directory (leave the ZIP file). The /q tag makes sure we don't get prompted for questions
echo Deleting original backup directory "%filename%"
rmdir "%filename%" /s /q
echo BACKUP COMPLETE
2) Schedule the backup
Open Computer Management
Go to Task Scheduler and select Create Task.
On the General tab, enter a description and select Run whether user is logged on or not if you want the task to run at night.
On the Triggers tab, select when you would like the task to run.
On the Actions tab, create a new action which points at your batch script.
I'm running on Linux, not Windows 2012, but here is what I do. On one of the servers in the replica set, this script runs every night via a cron job.
#config
BACKUPNAME=[backup file name]
DATAPATH=[path to mongo data folder]
DATESTAMP=$(date +"%Y-%m-%d")
FILENAME=backup.$BACKUPNAME.$DATESTAMP.tar.gz
TARPATH=$DATAPATH/$FILENAME
echo $DATESTAMP;
/etc/init.d/mongod stop
/usr/bin/mongodump --journal --dbpath $DATAPATH --out $DATAPATH/backup
tar czvf $TARPATH $DATAPATH/backup
rm -rf $DATAPATH/backup
/usr/bin/s3cmd put $TARPATH s3://[backup s3 bucket name]/$FILENAME
rm -f $TARPATH
/etc/init.d/mongod start
/scripts/prunebackups
I'm using s3cmd to send files to an S3 bucket on Amazon AWS, but you could just as easily copy the file anywhere. prunebackups is a script that deletes old backups from S3 based on how old they are.
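The prunebackups script itself isn't shown; a minimal sketch of the idea (assuming the backup.$NAME.$DATE.tar.gz naming from the script above, and an s3cmd ls style listing, simulated below) could select expired backups like this:

```shell
#!/bin/sh
# Prune sketch: list backups older than CUTOFF for deletion. The filenames
# embed a YYYY-MM-DD datestamp, which compares correctly as a plain string.
CUTOFF="2023-04-01"   # in practice e.g.: CUTOFF=$(date -d '30 days ago' +%Y-%m-%d)

# Sample listing standing in for: s3cmd ls s3://backup-bucket/
listing='s3://backup-bucket/backup.mydb.2023-03-15.tar.gz
s3://backup-bucket/backup.mydb.2023-04-20.tar.gz'

# Split on dots: the third-from-last field is the datestamp
to_delete=$(printf '%s\n' "$listing" | awk -F. -v cutoff="$CUTOFF" '$(NF-2) < cutoff')
echo "$to_delete"
# each line in $to_delete would then be removed with: s3cmd del "<line>"
```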
On Windows I'd create a batch file that does similar tasks. In essence:
Stop mongod
run mongodump to generate the data
zip up the dumped data and move it somewhere
clean up files
start mongod again
You can then use Task Scheduler to run it periodically.
If you have other mongod instances in the replica set, you shouldn't run into any issues with downtime. The backup instance in my setup is never used for reads or writes, but only for backups and in case one of the other instances goes down.
MongoDB has documentation on different backup strategies: http://docs.mongodb.org/manual/administration/backup/
We chose the mongodump approach because for us it's cheaper to store dump backups than snapshots. This is the specific strategy we used: http://docs.mongodb.org/manual/tutorial/backup-databases-with-binary-database-dumps/. Good news is, we actually had to restore data from a backup to production once and it was pretty painless.