FTP zip script - STUCK - iPhone

I have a bash script to back up my iOS files and send them to my website's FTP server, into the directory (http://mywebsite.com/sms). But when I run this code, it isn't zipping the files, and it leaves the file 'zippyy.db' in the root of my website, not in the /sms folder.
I will be running this script from a few devices, so when I execute the code, if there is already a file on the FTP server called zippyy.zip, it should name the new one zippyy1.zip, zippyy2.zip, etc.
I would be really grateful if somebody could rewrite the script for me. Thank you in advance! Here's my code:
#!/bin/bash
ROOTFOLDER="/var/root"
ZIPNAME="zipfolder"
ZIPFOLDER=$ROOTFOLDER/$ZIPNAME
LIBFOLDER="/var/mobile/Library"
ZIPFILE="zippyy.zip"
mkdir -p $ZIPFOLDER
cp $LIBFOLDER/SMS/sms.db $ZIPFOLDER/
cp $LIBFOLDER/Notes/notes.sqlite $ZIPFOLDER/
cp $LIBFOLDER/Safari/Bookmarks.db $ZIPFOLDER/
cp $LIBFOLDER/Safari/History.plist $ZIPFOLDER/
cd $ROOTFOLDER
zip -r $ZIPFILE $ZIPNAME
HOST=HOSTNAME
USER=USERNAME
PASS=PASSWORD
ftp -inv $HOST << EOF
user $USER $PASS
cd sms
dir . remote_dir.txt
bye
EOF
FILECOUNT=$(grep zippyy remote_dir.txt | wc -l)
NEXTDB="zippyy${FILECOUNT}.db"
mv $ZIPFILE $NEXTDB
ftp -inv $HOST << EOF
user $USER $PASS
put $NEXTDB
bye
EOF

You mean your archive is corrupt once it's been ftp'd?
It's likely you're sending the file in your machine's default transfer mode, which must be ASCII mode.
But first, on your local copy of the zip file, run the test option:
zip -t $ZIPFILE
If that succeeds, then change your ftp here-doc to:
ftp -inv $HOST << EOF
user $USER $PASS
binary
put $NEXTDB
bye
EOF
Note the addition of the ftp command binary, which tells ftp to send the file verbatim, without the translations it applies in ASCII mode.
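That fixes the transfer mode, but there are two other bugs visible in the original script: the archive is renamed with a .db extension (which is why zippyy.db shows up), and the second ftp session never does cd sms (which is why the file lands in the web root). A sketch of a corrected rename-and-upload, assuming remote_dir.txt was fetched from the sms directory as in the original script:
FILECOUNT=$(grep -c zippyy remote_dir.txt)
if [ "$FILECOUNT" -eq 0 ]; then
    NEXTZIP="zippyy.zip"                # first upload keeps the plain name
else
    NEXTZIP="zippyy${FILECOUNT}.zip"    # then zippyy1.zip, zippyy2.zip, ...
fi
mv "$ZIPFILE" "$NEXTZIP"
ftp -inv $HOST << EOF
user $USER $PASS
cd sms
binary
put $NEXTZIP
bye
EOF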
It's highly recommended to issue the following command
man ftp
And read through it at least once. Granted, there are sections of even a good ftp man page that I have failed to find useful! ;-) Also be aware that there are many ftp clients, with only a semblance of adherence to a common set of options, parameters and sub-commands. Don't assume that once you get it working at home, it will work at the office or at your friend's place!
IHTH

Related

access failed error - no such file while trying to move files

I am trying to move all the *.csv files to another folder on the server, but every time I get an "access failed" error. I am able to fetch all the files to the local server using mget, but mv fails every time; I can see the files on the server and have full permissions on them. The sh script is not working with wildcard characters. I'm stuck here with this simple command.
Download to local directory
localDir="/home/toor/UCDownloads/"
[ ! -d $localDir ] && mkdir -p $localDir
#sftp in the file directory to be downloaded
remoteDir="/share/CACHEDEV1_DATA/Lanein1/Unicard/"
#The file to be downloaded is fileName
lftp -u ${sftp_user},${password} sftp://${host}:${port}<<EOF
PS4='$LINENO: '
set xfer:log true
set xfer:log-file "$logfileUCARC"
set xfer:clobber true
set xfer:auto-rename true
debug 9
cd ${remoteDir}
lcd ${localDir}
#mget *.CSV
ls -l
mv "/share/CACHEDEV1_DATA/Lanein1/Unicard/"*.csv "/share/CACHEDEV1_DATA/Lanein1/Unicard/Archives/"
#rm /share/CACHEDEV1_DATA/Lanein1/Unicard/!(*.pdf)
bye
EOF
This is not a shell or Bash problem. It is an LFTP problem.
From the manual of LFTP:
mv file1 file2
Rename file1 to file2. No wildcard expansion is performed.
LFTP simply does not support what you are asking for. It will treat *.csv as part of the file name.
See here for an alternative.
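One workaround is to expand the wildcard on the shell side and emit one explicit mv per file. A minimal sketch, reusing the connection variables from the script above and assuming no whitespace in the file names (cls is lftp's own listing command, and -1 prints one name per line):
remoteDir="/share/CACHEDEV1_DATA/Lanein1/Unicard"
archiveDir="${remoteDir}/Archives"
# Let lftp expand the wildcard remotely and print the matching paths
files=$(lftp -u "${sftp_user},${password}" "sftp://${host}:${port}" \
            -e "cls -1 ${remoteDir}/*.csv; bye")
# Feed lftp one explicit mv command per file
{
    for f in $files; do
        printf 'mv "%s" "%s/%s"\n' "$f" "$archiveDir" "$(basename "$f")"
    done
    echo "bye"
} | lftp -u "${sftp_user},${password}" "sftp://${host}:${port}"
Newer lftp releases also ship an mmv command that accepts wildcards, which may be simpler if your version has it.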

Perl using the -i option on a vboxsf share: Can't remove input_file Text file busy, skipping file

System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I'm trying to use perl like sed in the terminal for in-place substitution of the input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With all other tools (sed, mv, vim or whatever) there is no problem changing the file.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message as gedit shows. So it is the same behavior, but googling around I can only find the gedit topic. It seems no one has noticed this with perl -i.
While you are running a unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files like unix file systems, and Perl -i requires support for anonymous files.
The workaround is to use a temporary file by using -i<ext> (e.g. -i~) instead of -i.
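For example, using the file from the question:
perl -i~ -p -e 's/orig/replace/g' input_file
This performs the same substitution but keeps a backup copy as input_file~.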
I have the same problem. My solution is a bash script: copy the files to tmp, search and replace there, overwrite the original files with the tmp files, then delete the tmp dir. If you need to, you can add parameters to the script for dynamic search & replace and create an alias to call the script directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir -p /tmp/myTmpFiles/
echo "Copy .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Search and Replace in tmp-files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copy .log to sharedfolder"
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Remove tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."

How to force wget to overwrite an existing file ignoring timestamp?

I tried '-N' and '--no-clobber', but the only result I get is a new copy of the existing example.exe with a number added, using the syntax 'example.exe.1'. This is not what I'd like. I just need to download and overwrite the example.exe file in the same folder where I already saved a copy, without wget checking whether mine is older or newer than the example.exe file already present in my download folder. Do you think this is possible, or do I need to create a script that deletes the example.exe file, or maybe something that changes its modification date, etc.?
If you specify the output file using the -O option it will overwrite any existing file.
For example:
wget -O index.html bbc.co.uk
Run it multiple times and it will keep overwriting index.html.
wget doesn't let you overwrite an existing file unless you explicitly name the output file on the command line with option -O.
I'm a bit lazy and I don't want to type the output file name on the command line when it is already known from the downloaded file. Therefore, I use curl like this:
curl -O http://ftp.vim.org/vim/runtime/spell/fr.utf-8.spl
Be careful when downloading files like this from unsafe sites. The above command will write a file named as the connected web site wishes to name it (inside the current directory though). The final name may be hidden through redirections and php scripts or be obfuscated in the URL. You might end up overwriting a file you don't want to overwrite.
And if you ever find a file named ls or any other enticing name in the current directory after using curl that way, refrain from executing the downloaded file. It may be a trojan downloaded from a rogue or corrupted web site!
wget --backups=1 google.com
renames the original file with a .1 suffix and writes the new file to the intended filename.
Not exactly what was requested, but could be handy in some cases.
-c or --continue
From the manual:
If you use ‘-c’ on a non-empty file, and the server does not support continued downloading, Wget will restart the download from scratch and overwrite the existing file entirely.
I like the -c option. I started with the man page, then the web, and I've searched for this several times. For example, if you're relaying a webcam, the image always needs to be named image.jpg. It seems like this should be clearer in the man page.
I've been using this for a couple of years to download things in the background, sometimes combined with "limit-rate = " in my wgetrc file:
while true
do
    wget -c -i url.txt && break
    echo "Restarting wget"
    sleep 2
done
Make a little file called url.txt and paste the file's URL into it. Set this script up in your path or maybe as an alias and run it. It keeps retrying the download until there's no error. Sometimes at the end it gets into a loop displaying
416 Requested Range Not Satisfiable
The file is already fully retrieved; nothing to do.
but that's harmless, just ctrl-c it. I think it's always gotten the file I wanted even if wget runs out of retries or the connection temporarily goes away. I've downloaded things for days at a time with it. A CD image on dialup, yes, always with wget.
My use case involves two different URLs, sometimes the second one doesn't exist, but if it DOES exist, I want it to overwrite the first file.
The problem of using wget -O is that, when the second file DOESN'T exist, it will overwrite the first file with a BLANK file.
So the only way I could find is with an if statement:
--spider checks if a file exists, and returns 0 if it does
--quiet fails quietly, with no output
-nv is quiet, but still reports errors
wget -nv https://example.com/files/file01.png -O file01.png
# quietly check if a different version exists
wget --quiet --spider https://example.com/custom-files/file01.png
if [ $? -eq 0 ] ; then
# A different version exists, so download and overwrite the first
wget -nv https://example.com/custom-files/file01.png -O file01.png
fi
It's verbose, but I found it necessary. I hope this is helpful for someone.
Here is an easy way to get it done with shell parameter expansion:
url=https://example.com/example.exe ; wget -nv $url -O ${url##*/}
Or you can use basename
url=https://example.com/example.exe ; wget -nv $url -O $( basename $url )
For those who do not want to use -O and want to specify the output directory only, the following command can be used.
wget \
--directory-prefix "$dest" \
--backups 0 \
-- "$link"
The first command downloads from the source with wget; the second command removes the older backup file:
wget \
--directory-prefix "$dest" \
--backups 0 \
-- "$link"; \
rm -f -- "$file.1";

how to send an email alert if a folder is modified

I want to get an email alert if a certain folder is modified, but how do I pipe the output from the command so that it sends an email instead of just showing the changes to the folder in the terminal?
Something like the following, but... it gives an error on the email part:
inotifywait -m /home/tom -e create -e moved_to |
while read path action file; do
echo "The file '$file' appeared in directory '$path' via '$action'"
| /usr/bin/Mail -s "notify" "email@12345mail.com"
done
Could it be you simply missed the semicolon before done?
This line works for me (note I also used mutt instead of Mail):
inotifywait -m /home/tom -e create -e moved_to | while read path action file; do echo "The file '$file' appeared in directory '$path' via '$action'" | /usr/bin/mutt -s "notify" "email@12345.com" ;done
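The same pipeline split over several lines for readability (a sketch; it assumes mutt is installed and configured for outbound mail):
inotifywait -m /home/tom -e create -e moved_to |
while read path action file; do
    echo "The file '$file' appeared in directory '$path' via '$action'" |
        /usr/bin/mutt -s "notify" "email@12345.com"
done
Note that a pipeline can only continue across lines when the | sits at the end of a line, which is why the snippet in the question errors out: its second line starts with |.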

Perl Program to search for a string over a set of files over SSH

I have a perl script which can be used to ssh into a remote server using Net::SSH2. I need to search for a particular string in the files in a given directory on the remote system and print the files in which the string occurs. Any ideas / sample code on how I can go about this?
Thanks
Solution proposed and accepted in the comments:
ssh <user>@<host> grep -d recurse -l <string> <directories>
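For example, with hypothetical values (-d recurse makes grep descend into the directory, -l prints only the names of matching files):
# Print the files under /var/log/myapp on the remote host that contain "timeout"
ssh admin@server.example.com grep -d recurse -l timeout /var/log/myapp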