sed in script creating corrupt .tar.gz file

I am installing a program which includes a file "drown.bin" (a Bourne shell script text executable).
When I execute this file as part of the process, it gives this error:
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error exit delayed from previous errors
Below are the file contents (I pasted only the Bash portion; the rest is binary data)
dir_tmp=/tmp/.$(date +"%y%m%d%H%M%S")
mkdir /$dir_tmp >/dev/null
sed -n -e '1,/^exit 0$/!p' $0 > "${dir_tmp}/.make-3000.tar.gz" 2>/dev/null
cd $dir_tmp >/dev/null
tar xvfz .make*.tar.gz >/dev/null
./.make
rm -rf $dir_tmp >/dev/null
exit 0
Can someone please advise what goes wrong in the sed command such that it creates a corrupted .tar.gz file? I have already tried three systems with different CentOS versions.

It's not the sed command that fails, but the tar command.
This is a "self-extracting" tar file. The script that sits in front attempts to unpack the rest of the file, which starts right after the exit 0 line. Most likely that part of the file is corrupt.
If you downloaded it, try downloading it again. If you copied it from somewhere else (especially over FTP), make sure you used binary mode.
If that doesn't help, here is what you can try:
Copy the script file to a file with the extension .tgz, e.g. cp drown.bin mycopy.tgz.
Edit the copy, removing all script lines up to and including the exit 0 line. The file should now be a pure tar.gz file.
On the command line, run tar tzvf mycopy.tgz to list the contents, then tar xzvf mycopy.tgz to actually unpack. This will likely fail with the same error you got at first, but at least you can now see the extracted content, or at least the part that didn't fail.
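If you only want to check the embedded archive without editing a copy by hand, you can reuse the very sed expression from the script to pull out the payload and test it. A minimal sketch (assuming drown.bin is in the current directory):
# extract everything after the first "exit 0" line, exactly as the installer does
sed -n -e '1,/^exit 0$/!p' drown.bin > payload.tar.gz
# test the compressed stream; a truncated or corrupted download shows up here
gzip -t payload.tar.gz && echo "gzip stream OK" || echo "gzip stream damaged"
# list the contents without unpacking
tar tzvf payload.tar.gz
If gzip -t already complains, the download or transfer mangled the file, and no variation of the sed command will fix it.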

Related

access failed error - no such file while trying to move files

I am trying to move all the *.csv files to another folder on the server, but every time I get an access failed error. I can fetch all the files to the local server using mget, but mv fails every time, even though I can see the files on the server and have full permissions on them. The sh script is not working with wildcard characters, and I am stuck on this simple command.
Download to local directory
localDir="/home/toor/UCDownloads/"
[ ! -d $localDir ] && mkdir -p $localDir
#sftp in the file directory to be downloaded
remoteDir="/share/CACHEDEV1_DATA/Lanein1/Unicard/"
#The file to be downloaded is fileName
lftp -u ${sftp_user},${password} sftp://${host}:${port}<<EOF
PS4='$LINENO: '
set xfer:log true
set xfer:log-file "$logfileUCARC"
set xfer:clobber true
set xfer:auto-rename true
debug 9
cd ${remoteDir}
lcd ${localDir}
#mget *.CSV
ls -l
mv "/share/CACHEDEV1_DATA/Lanein1/Unicard/"*.csv "/share/CACHEDEV1_DATA/Lanein1/Unicard/Archives/"
#rm /share/CACHEDEV1_DATA/Lanein1/Unicard/!(*.pdf)
bye
EOF
This is not a shell or Bash problem. It is an LFTP problem.
From the manual of LFTP:
mv file1 file2
Rename file1 to file2. No wildcard expansion is performed.
LFTP just does not support what you are asking for. It will treat *.csv as part of the file name.
See here for an alternative.
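One possibility, sketched under the assumption that your lftp is recent enough to have the mmv command (which, unlike mv, does expand wildcards), reusing the connection variables from the question:
lftp -u "${sftp_user},${password}" "sftp://${host}:${port}" <<'EOF'
cd /share/CACHEDEV1_DATA/Lanein1/Unicard/
# mmv expands *.csv and moves every match into Archives/
mmv *.csv Archives/
bye
EOF
If your lftp is older and lacks mmv, the fallback is to rename the files one at a time with mv, since plain mv only takes literal names.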

Perl using the -i option on a vboxsf share: Can't remove input_file Text file busy, skipping file

System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I am trying to use perl like sed in the terminal for in-place substitution of the input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With all other tools (sed, mv, vim or whatever) there is no problem changing the file.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message that gedit shows.
So it is the same behavior, but googling around I can only find the gedit topic. It seems no one has noticed this with perl -i.
While you are running a Unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files the way Unix file systems do, and perl -i relies on support for anonymous files.
The workaround is to use a temporary backup file by passing -i<ext> (e.g. -i~) instead of a bare -i.
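A minimal sketch of that workaround, using the same substitution as in the question:
# writes a real backup file (input_file~) instead of relying on an anonymous, unlinked file
perl -i~ -p -e 's/orig/replace/g' input_file
# drop the backup afterwards if you don't want to keep it
rm -f input_file~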
I had the same problem. My solution is a bash script: copy the files to /tmp, search and replace there, overwrite the original files with the tmp files, then delete the tmp dir. If you need to, you can add parameters to the script for dynamic search & replace and create an alias so you can call the script directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir /tmp/myTmpFiles/
echo "Copy .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Search and Replace in tmp-files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copy .log to sharedfolder"
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Remove tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."

How to force wget to overwrite an existing file ignoring timestamp?

I tried '-N' and '--no-clobber', but the only result I get is a new copy of the existing example.exe with a number appended, i.e. 'example.exe.1'. This is not what I'd like to get. I just need to download and overwrite example.exe in the same folder where I already saved a copy, without wget checking whether mine is older or newer than the example.exe already present in my download folder. Do you think this is possible, or do I need a script that deletes the existing example.exe, or changes its modification date, etc.?
If you specify the output file using the -O option, wget will overwrite any existing file.
For example:
wget -O index.html bbc.co.uk
Run multiple times, it will keep overwriting index.html.
wget doesn't let you overwrite an existing file unless you explicitly name the output file on the command line with the -O option.
I'm a bit lazy and I don't want to type the output file name on the command line when it is already known from the downloaded file. Therefore, I use curl like this:
curl -O http://ftp.vim.org/vim/runtime/spell/fr.utf-8.spl
Be careful when downloading files like this from untrusted sites. The above command will write a file named however the remote web site wants to name it (inside the current directory, though). The final name may be hidden behind redirections and PHP scripts, or be obfuscated in the URL, so you might end up overwriting a file you don't want to overwrite.
And if you ever find a file named ls or some other enticing name in the current directory after using curl that way, do not execute the downloaded file. It may be a trojan from a rogue or compromised web site!
wget --backups=1 google.com
renames the original file with a .1 suffix and writes the new file to the intended filename.
Not exactly what was requested, but it can be handy in some cases.
-c or --continue
From the manual:
If you use '-c' on a non-empty file, and the server does not support
continued downloading, Wget will restart the download from scratch and
overwrite the existing file entirely.
I like the -c option. I started with the man page, then the web, but I've searched for this several times. For example, if you're relaying a webcam, the image always needs to be named image.jpg. It seems like this behaviour should be spelled out more clearly in the man page.
I've been using this for a couple of years to download things in the background, sometimes combined with "limit-rate = " in my wgetrc file:
while true
do
    wget -c -i url.txt && break
    echo "Restarting wget"
    sleep 2
done
Make a little file called url.txt and paste the file's URL into it. Set this script up in your path, or maybe as an alias, and run it. It keeps retrying the download until there's no error. Sometimes at the end it gets into a loop displaying:
416 Requested Range Not Satisfiable
The file is already fully retrieved; nothing to do.
but that's harmless; just Ctrl-C it. I think it has always gotten the file I wanted, even when wget runs out of retries or the connection temporarily goes away. I've downloaded things for days at a time with it, including a CD image on dialup, yes, always with wget.
My use case involves two different URLs; sometimes the second one doesn't exist, but if it does exist, I want it to overwrite the first file.
The problem with using wget -O is that when the second file doesn't exist, it overwrites the first file with a blank file.
So the only way I could find is with an if statement:
--spider checks whether a file exists, and returns 0 if it does
--quiet turns off output, so the check fails quietly
-nv is quiet, but still reports errors and basic information
wget -nv https://example.com/files/file01.png -O file01.png
# quietly check if a different version exists
wget --quiet --spider https://example.com/custom-files/file01.png
if [ $? -eq 0 ] ; then
# A different version exists, so download and overwrite the first
wget -nv https://example.com/custom-files/file01.png -O file01.png
fi
It's verbose, but I found it necessary. I hope this is helpful for someone.
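A slightly shorter variant of the same idea, as a sketch with the same hypothetical URLs, is to test wget's exit status directly in the if:
wget -nv https://example.com/files/file01.png -O file01.png
# download the custom version only if the spider check succeeds
if wget --quiet --spider https://example.com/custom-files/file01.png; then
    wget -nv https://example.com/custom-files/file01.png -O file01.png
fi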
Here is an easy way to get it done with parameter trimming
url=https://example.com/example.exe ; wget -nv $url -O ${url##*/}
Or you can use basename
url=https://example.com/example.exe ; wget -nv $url -O $( basename $url )
For those who do not want to use -O and want to specify only the output directory, the following command can be used.
wget \
--directory-prefix "$dest" \
--backups 0 \
-- "$link"
The first command downloads from the source with wget; the second command removes the older file:
# download the new copy, then remove the .1 backup of the older file
wget \
--directory-prefix "$dest" \
--backups 0 \
-- "$link"
# double quotes so "$file" (the downloaded file's name) actually expands
rm -f "$file.1"

Why can I save a .sh file as .txt and it still works when I run it

I have this in my myshellscript.txt:
#!/bin/sh
if [ -f $1 ]
then
cat $1
else
echo "Sorry, not found"
fi
Why is it that even though it is a .txt file, I can still run it using sh myshellscript.txt someotherfile.txt?
Because you put a shebang (magic line) in the first line:
#!/bin/sh
This tells the system it is a script it can run: when you make the file executable (chmod +x myshellscript.txt) and invoke it directly, the kernel uses this line to find the interpreter. When you run sh myshellscript.txt, you name the interpreter yourself, so it works even without the execute bit. Unix does not care about file extensions the way Windows does, so whether a script can run does not depend on its extension.
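A quick way to see both invocation styles, as a sketch assuming the script from the question sits in the current directory:
# explicit interpreter: the shebang is not even consulted here
sh myshellscript.txt someotherfile.txt
# direct execution: needs the execute bit, and the kernel reads the #!/bin/sh line
chmod +x myshellscript.txt
./myshellscript.txt someotherfile.txt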

How to use the tar -I option

I'm trying to tar up all the *.class files only on a Solaris box under a certain directory.
Reading the man pages for tar made it seem like the -I option is what I wanted.
This is what I've tried from the dir in question:
find . -name "*.class" >> ~/includes.txt
tar cvf ~/classfiles.tar -I ~/includes.txt
From that I get:
tar: Removing leading `/' from member names
/home/myhomedir/includes.txt
And the resulting ~/classfiles.tar file is garbage.
I don't have write permission on the directory where the *.class files are, so I need the tar to be written to my home directory. Could someone tell me where I have gone wrong? What tar magic should I use?
Check which tar you are running. That message about removing the leading slash is a gtar (GNU tar) message, and the -I option you are trying to use is a Sun tar option (Sun tar lives in /bin/tar).
(At least the above is all true on my Solaris machine.)
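As a sketch of the difference (the exact paths can vary between Solaris releases):
# Sun tar (typically /bin/tar on Solaris): -I reads a list of files to include
find . -name "*.class" > ~/includes.txt
/bin/tar cvf ~/classfiles.tar -I ~/includes.txt
# GNU tar (gtar): the equivalent option is -T / --files-from
gtar cvf ~/classfiles.tar -T ~/includes.txt
Note the single > rather than >>, so that re-running the find does not pile duplicate entries into includes.txt.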