I am trying to move all the *.csv files to another folder on the server, but every time I get an "access failed" error. I am able to pull all the files to the local server using mget, but mv fails every time. I can see the files on the server and have full permissions on them; the script just does not work with wildcards. I am stuck on this simple command.
Download to local directory
localDir="/home/toor/UCDownloads/"
[ ! -d $localDir ] && mkdir -p $localDir
#sftp in the file directory to be downloaded
remoteDir="/share/CACHEDEV1_DATA/Lanein1/Unicard/"
#The file to be downloaded is fileName
lftp -u ${sftp_user},${password} sftp://${host}:${port}<<EOF
PS4='$LINENO: '
set xfer:log true
set xfer:log-file "$logfileUCARC"
set xfer:clobber true
set xfer:auto-rename true
debug 9
cd ${remoteDir}
lcd ${localDir}
#mget *.CSV
ls -l
mv "/share/CACHEDEV1_DATA/Lanein1/Unicard/"*.csv "/share/CACHEDEV1_DATA/Lanein1/Unicard/Archives/"
#rm /share/CACHEDEV1_DATA/Lanein1/Unicard/!(*.pdf)
bye
EOF
This is not a shell or Bash problem. It is an lftp problem.
From the lftp manual:
mv file1 file2
Rename file1 to file2. No wildcard expansion is performed.
lftp simply does not support what you are asking for. It will treat *.csv as part of the file name.
See here for an alternative.
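One possible workaround (a sketch, not taken from the linked answer): have lftp list the matching files with cls, then feed it one explicit mv per file so no wildcard expansion is needed. The variables and the Archives path are the ones from the script above; the batch file name is made up.
# Build a batch of explicit mv commands, one per .csv file, then run it.
batch=/tmp/mv_batch.lftp        # hypothetical scratch file
lftp -u "${sftp_user},${password}" "sftp://${host}:${port}" \
     -e "cd ${remoteDir}; cls -1 *.csv; bye" |
while IFS= read -r f; do
    printf 'mv "%s%s" "%sArchives/%s"\n' "$remoteDir" "$f" "$remoteDir" "$f"
done > "$batch"
echo "bye" >> "$batch"
lftp -u "${sftp_user},${password}" "sftp://${host}:${port}" -f "$batch"
Recent lftp versions also document an mmv command for moving several files into a directory; if your build has it, something like mmv *.csv Archives/ inside the session may be all you need (check man lftp on your system first).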
System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I am trying to use perl like sed in the terminal for in-place substitution on an input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With every other tool (sed, mv, vim, or whatever) changing the file is no problem.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message that gedit shows.
So it is the same behavior, but googling around I can only find the gedit topic. It seems no one has noticed this with perl -i.
While you are running a Unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files the way Unix file systems do, and perl -i requires support for anonymous files.
The workaround is to use a named backup file by passing -i<ext> (e.g. -i~) instead of plain -i.
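A minimal example of the suggested invocation, using the same substitution as in the question:
# Keep a named backup (input_file~) instead of relying on an anonymous temp file.
perl -i~ -p -e 's/orig/replace/g' input_file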
I had the same problem. My solution is a bash script: copy the files to a tmp directory, do the search and replace there, copy the tmp files back over the originals, then delete the tmp directory. If you need to, you can add parameters to the script for dynamic search and replace, and create an alias so you can call the script directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir /tmp/myTmpFiles/
echo "Copy .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Search and Replace in tmp-files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copy .log to sharedfolder"
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Remove tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."
I want to download an entire website using the wget -r command and rename the downloaded files.
I have tried with:
wget -r -o doc.txt "http....
hoping that the files would automatically be created in order, like doc1.txt, doc2.txt, and so on, but it actually saved the stdout stream to that file.
Is there any way to do this with just one command?
Thanks!
-r tells wget to recursively get resources from a host.
-o file saves log messages to file instead of standard error. That is not what you are looking for; you probably meant -O file.
-O file stores the resource(s) in the given file, instead of creating a file in the current directory named after the resource. Used in conjunction with -r, it causes wget to store all resources concatenated into that single file.
Since wget -r downloads and stores more than one file, recreating the server's file tree on the local system, it makes no sense to give the name of a single file to store everything in.
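For a single download the difference looks roughly like this (the URL and file names are illustrative):
wget -o fetch.log -O page.html "http://example.com/"
# -o: the transfer log goes to fetch.log
# -O: the downloaded content goes to page.html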
If what you want is to rename all the downloaded files to match the pattern docX.txt, you can do it with a separate command after wget has finished:
wget -r http....
i=1
while IFS= read -r file
do
mv "$file" "$(dirname "$file")/doc$i.txt"
i=$(( $i + 1 ))
done < <(find . -type f)
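A slightly more robust variant of the same loop, assuming GNU find and bash: NUL-delimited names survive spaces and unusual characters in the downloaded paths.
i=1
while IFS= read -r -d '' file
do
    mv "$file" "$(dirname "$file")/doc$i.txt"
    i=$(( i + 1 ))
done < <(find . -type f -print0)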
I tried to deploy my personal blog to my remote server recently. When I tried to move a few files and directories to another place with mv, an unexpected error occurred: the command line echoed "Directory not empty". After some googling, I tried again with the -f and -v switches, with the same result.
I logged in on the root account, and the process is here:
root@danielpan:~# shopt -s dotglob
root@danielpan:~# mv /var/www/html/wordpress/* /var/www/html
mv: cannot move `/var/www/html/wordpress/wp-content` to `/var/www/html/wp-content`:
Directory not empty
root@danielpan:~# mv -f /var/www/html/wordpress/* /var/www/html
mv: cannot move `/var/www/html/wordpress/wp-content` to `/var/www/html/wp-content`:
Directory not empty
Anybody know why?
(I'm running Ubuntu 14.04)
If you have sub-directories and mv is not working:
cp -R source/* destination/
rm -R source/
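A small hedged refinement: chaining the two commands with && makes sure the source is only removed if the copy succeeded.
cp -R source/* destination/ && rm -R source/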
I finally found the solution. Because /var/www/html/wp-content already exists, trying to move /var/www/html/wordpress/wp-content there fails with "Directory not empty". So you need to move the contents, /var/www/html/wordpress/wp-content/*, into /var/www/html/wp-content instead.
Just execute this:
mv /var/www/html/wordpress/wp-content/* /var/www/html/wp-content
rmdir /var/www/html/wordpress/wp-content
rmdir /var/www/html/wordpress
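One hedged caveat, based on default bash globbing rather than anything stated in this answer: the * above skips hidden files unless dotglob is enabled, as it was in the question's session.
shopt -s dotglob    # make * also match hidden files
mv /var/www/html/wordpress/wp-content/* /var/www/html/wp-content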
Instead of copying directories with cp or rsync, I prefer:
oldpwd=$PWD
cd "${source_path}"
find . -type d -exec mkdir -p "${destination_path}/{}" \;
find . -type f -exec mv {} "${destination_path}/{}" \;
cd "$oldpwd"
This moves the files (actually renames them when source and destination are on the same filesystem) and overwrites existing ones, so it is fast enough.
Afterwards ${source_path} contains only empty subfolders, which you can clean up with rm -rf ${source_path}.
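A hedged wrapper around the same technique; the function name is made up, and it assumes GNU find (which substitutes {} even inside a longer argument, as the commands above already rely on) plus an absolute destination path, since the function changes directory into the source.
# Move the contents of $1 into $2, merging with whatever is already there.
merge_move() {
    local src=$1 dest=$2              # dest must be an absolute path
    ( cd "$src" || exit 1
      find . -type d -exec mkdir -p "$dest/{}" \;
      find . -type f -exec mv {} "$dest/{}" \; ) || return 1   # keep $src if the cd failed
    rm -rf "$src"                     # only empty subfolders remain at this point
}
merge_move /var/www/html/wordpress /var/www/html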
I am installing a program which has a file "drown.bin" (Bourne shell script text executable).
When I execute this file as part of the process, it gives these errors:
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error exit delayed from previous errors
Below are the file contents (I pasted only the shell script portion; the rest is binary data):
dir_tmp=/tmp/.$(date +"%y%m%d%H%M%S")
mkdir /$dir_tmp >/dev/null
sed -n -e '1,/^exit 0$/!p' $0 > "${dir_tmp}/.make-3000.tar.gz" 2>/dev/null
cd $dir_tmp >/dev/null
tar xvfz .make*.tar.gz >/dev/null
./.make
rm -rf $dir_tmp >/dev/null
exit 0
Can someone please advise what goes wrong in the sed command that creates the .tar.gz file, so that it ends up corrupted? I have already tried three systems with different CentOS versions.
It's not the sed command that fails, but the tar command.
This is a "self-extracting" tar file. The script that sits in front attempts to unpack the rest of the file, starting after the line exit 0. Likely the rest of the file is somehow corrupt.
If you downloaded it, try that again. If you copied it from somewhere else (especially FTP) make sure you used binary mode.
If that didn't work, here is what you could try (the steps are collected into a command sketch after this list):
copy the script-file to a file with extension .tgz, e.g. cp drown.bin mycopy.tgz
edit the copy, removing all script lines up to and including the exit 0 line. The file now should be a pure tar.gz file.
on the command line, run tar tzvf mycopy.tgz to see the contents. Try tar xzvf mycopy.tgz to actually unpack. This will likely fail with the same error you got first, but at least you can now see the extracted content, or at least the part that didn't fail.
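Put together as commands, the manual check might look like this; drown.bin and the exit 0 marker come from the question, the copy names are made up, and the sed line mirrors the extraction the installer itself performs:
cp drown.bin mycopy.tgz
# Strip everything up to and including the "exit 0" line, keeping only the payload.
sed -e '1,/^exit 0$/d' mycopy.tgz > payload.tar.gz
tar tzvf payload.tar.gz     # list the contents
tar xzvf payload.tar.gz     # extract whatever is readable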
I have a bash script to back up my iOS files and send them to my website's FTP, into the directory http://mywebsite.com/sms, but when I run it, it isn't zipping the files and it leaves the file 'zippyy.db' in the root of my website rather than in the /sms folder.
I will be running this script from a few devices, so when I execute it, if there is already a file on the FTP called zippyy.zip, it should upload the new one as zippyy1.zip, zippyy2.zip, and so on.
I would be really grateful if somebody could rewrite the script for me. Thank you in advance! Here's my code:
#!/bin/bash
ROOTFOLDER="/var/root"
ZIPNAME="zipfolder"
ZIPFOLDER=$ROOTFOLDER/$ZIPNAME
LIBFOLDER="/var/mobile/Library"
ZIPFILE="zippyy.zip"
mkdir -p $ZIPFOLDER
cp $LIBFOLDER/SMS/sms.db $ZIPFOLDER/
cp $LIBFOLDER/Notes/notes.sqlite $ZIPFOLDER/
cp $LIBFOLDER/Safari/Bookmarks.db $ZIPFOLDER/
cp $LIBFOLDER/Safari/History.plist $ZIPFOLDER/
cd $ROOTFOLDER
zip -r $ZIPFILE $ZIPNAME
HOST=HOSTNAME
USER=USERNAME
PASS=PASSWORD
ftp -inv $HOST << EOF
user $USER $PASS
cd sms
dir . remote_dir.txt
bye
EOF
FILECOUNT=$(grep zippyy remote_dir.txt | wc -l)
NEXTDB="zippyy${FILECOUNT}.db"
mv $ZIPFILE $NEXTDB
ftp -inv $HOST << EOF
user $USER $PASS
put $NEXTDB
bye
EOF
You mean your archive is corrupt once it's been ftp'd?
It's likely you're sending the file in your machine's default transfer mode, which must be ASCII mode.
But first, on your local copy of the zip file, run zip's test option:
zip -t $ZIPFILE
If that succeeds, then change your ftp here-doc to:
ftp -inv $HOST << EOF
user $USER $PASS
binary
put $NEXTDB
bye
EOF
Note the addition of the ftp command binary, which tells ftp to send the file without the translations done for ASCII mode.
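Separately, an observation drawn from the question's own script rather than from the corruption issue: the second here-doc never changes into the sms directory, which would explain the upload landing in the site root. A hedged combination of both changes:
ftp -inv $HOST << EOF
user $USER $PASS
binary
cd sms
put $NEXTDB
bye
EOF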
It's highly recommended to issue the following command
man ftp
And read through it at least once. Granted, there are sections of even a good ftp man page that I have failed to find useful! ;-) Also be aware that there are many ftp clients, with only a semblance of adherence to a common set of options, parameters, and sub-commands. Don't assume that once you get it working at home, it will work at the office or at your friend's place!
IHTH