I want to download a big file from a normal http-link to an ftp-server (under ubuntu) without storing the file locally (as my local storage is too small).
Do you have any ideas how to do this with wget or a small perl-script? (I don't have sudo-rights on the local machine).
Here's my take, combining wget and Net::FTP on the commandline.
wget -O - http://website.com/hugefile.zip | perl -MNet::FTP -e 'my $ftp = Net::FTP->new("ftp.example.com"); $ftp->login("user", "pass"); $ftp->binary; $ftp->put(\*STDIN, "hugefile.zip");'
Of course, you can put it in a file (ftpupload.pl) as well and run it.
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;

# connect to the FTP server and log in with your credentials
my $ftp = Net::FTP->new("ftp.example.com") or die "Cannot connect: $@";
$ftp->login("user", "pass") or die "Login failed: ", $ftp->message;

# switch to binary mode so the zip is not mangled by ASCII line-ending translation
$ftp->binary;

# Because of the pipe we get the file content on STDIN.
# Net::FTP's put accepts a filehandle, so STDIN works here.
$ftp->put(\*STDIN, "hugefile.zip") or die "Upload failed: ", $ftp->message;
Run it like this:
wget -O - http://website.com/hugefile.zip | perl ftpupload.pl
There's - of course - a CPAN module which makes life easy for FTP:
http://search.cpan.org/search?mode=module&query=Net%3A%3AFTP
And WWW::Mechanize looks up files, follows links, etc.
With these modules I think you can solve your problem.
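If you want a quick sanity check after the upload, here is a sketch (the server name, credentials and file name are the same placeholders as above) that asks the FTP server for the size of the uploaded file via Net::FTP's size method:
# print the size of the uploaded file as seen by the FTP server
perl -MNet::FTP -e 'my $ftp = Net::FTP->new("ftp.example.com") or die $@; $ftp->login("user", "pass") or die $ftp->message; print $ftp->size("hugefile.zip"), "\n";'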
You can try wput. It is not a very well-known tool, but I think it could work for you.
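Basic wput usage looks roughly like this (a sketch; the file name, credentials and server are placeholders). Note that wput uploads an existing local file, so to stay within the original constraint of not storing the file locally it would still have to be combined with some form of streaming:
# wget-style FTP upload of a local file
wput hugefile.zip ftp://user:pass@ftp.example.com/upload/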
Use wget's output-document option
wget -O /dev/null http://foo.com/file.uuu
From wget's manual page:
"-O file
--output-document=file
The documents will not be written to the appropriate files, but all will be
concatenated together and written to file. If - is used as file, documents will be
printed to standard output, disabling link conversion. (Use ./- to print to a file
literally named -.)
Use of -O is not intended to mean simply "use the name file instead of the
one in the URL;" rather, it is analogous to shell redirection: wget -O file http://foo is
intended to work like wget -O - http://foo > file; file will be truncated immediately,
and all downloaded content will be written there."
However, I can't see what the purpose of that would be.
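To make the quoted manual text concrete, here is a small sketch (the URL is a placeholder) of the two forms it says are intended to behave the same; the pipe-based approach above builds on the second form:
# write straight to a local file
wget -O copy.bin http://example.com/file.bin
# stream to stdout and redirect; per the manual, -O file is modeled on this
wget -O - http://example.com/file.bin > copy.bin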
I use the command below to send emails on an Ubuntu server. It seems to attach the testreport.csv file with its full path as the filename.
echo "This email is a test email" | mailx -s 'Test subject' testemail@gmail.com -A "/home/dxduser/reports/testreport.csv"
How can I stop this from happening? Is it possible to attach the file with its actual name? In this case "testreport.csv"?
I use mailx (GNU Mailutils) version 3.7 on Ubuntu 20.04.
There are multiple different mailx implementations around, so what exactly works will depend on the version you have installed.
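A quick way to see which implementation you actually have is sketched below (GNU Mailutils answers --version; other variants may not support that flag, and dpkg -S only applies to Debian/Ubuntu systems):
# which binary actually runs, and which package owns it
command -v mailx
dpkg -S "$(readlink -f "$(command -v mailx)")"
# GNU Mailutils prints its version string here
mailx --version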
However, as a quick and dirty workaround, you can temporarily cd into that directory (provided you have execute access to it):
( cd /home/dxduser/reports
echo "This email is a test email" |
mailx -s 'Test subject' testemail@gmail.com -A testreport.csv
)
The parentheses run the command in a subshell so that the effect of the cd will only affect that subshell, and the rest of your program can proceed as before.
I would regard it as a bug if your mailx implementation puts in a Content-Disposition: with the full path of the file in the filename.
An alternative approach would be to use a different client. If you can't install e.g. mutt, creating a simple shell script wrapper to build the MIME structure around a base64 or quoted-printable encoding of your CSV file is not particularly hard, but you have to know what you are doing. In very brief,
( cat <<\:
Subject: test email
Content-type: text/csv
Content-disposition: attachment; filename="testreport.csv"
From: me <myself@example.org>
To: you <recipient@example.net>
Content-transfer-encoding: base64

:
base64 </home/dxduser/reports/testreport.csv
) | /usr/lib/sendmail -oi -t
where obviously you have to have base64 and sendmail installed, and probably tweak the path to sendmail (or just omit it if it's in your PATH).
I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my list-of-files-to-download.txt has these kinds of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the -nc or --no-clobber option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script -
for i in $(cat list-of-files-to-download.txt); do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check if the file exists on the disk, and then take a decision to rename the temp file to the name that you want.
This offers much more control over your downloads.
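Here is a minimal sketch of that temp-file approach in bash (the output directory and flags are placeholders taken from the question; -N is left out because wget ignores timestamping when -O is used, so any freshness check would have to be done by hand):
outdir=/directory/for/downloaded/files
while read -r url; do
    name=$(basename "$url")                  # picture-same-name.jpg
    dir=$(basename "$(dirname "$url")")      # directory1, directory2, ...
    target="$outdir/${dir}_${name}"          # unique name per source directory
    if [ ! -e "$target" ]; then
        wget --user-agent='some-agent' --referer=http://some-referrer.html --timeout=60 -O "$target.part" "$url" && mv "$target.part" "$target"
    fi
done < list-of-files-to-download.txt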
I'm trying to install OpenVPN on my CentOS 6 box.
I'm using the EPEL repository to install the VPN.
Everything went fine during the installation, but somehow, when I got to the certificate-generation part, lots of "command not found" errors were raised when I typed the "source ./vars" command.
Here is what the terminal returned:
[root@... easy-rsa]# source ./vars
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
NOTE: If you run ./clean-all, I will be doing a rm -rf on /etc/openvpn/easy-rsa/keys
: command not found
: command not found
: command not found
: command not found
: command not found
Here is my vars file:
# easy-rsa parameter settings
# NOTE: If you installed from an RPM,
# don't edit this file in place in
# /usr/share/openvpn/easy-rsa --
# instead, you should copy the whole
# easy-rsa directory to another location
# (such as /etc/openvpn) so that your
# edits will not be wiped out by a future
# OpenVPN package upgrade.
# This variable should point to
# the top level of the easy-rsa
# tree.
export EASY_RSA="`pwd`"
#
# This variable should point to
# the requested executables
#
export OPENSSL="openssl"
export PKCS11TOOL="pkcs11-tool"
export GREP="grep"
# This variable should point to
# the openssl.cnf file included
# with easy-rsa.
export KEY_CONFIG=/etc/openvpn/easy-rsa/openssl.cnf
# Edit this variable to point to
# your soon-to-be-created key
# directory.
#
# WARNING: clean-all will do
# a rm -rf on this directory
# so make sure you define
# it correctly!
export KEY_DIR="$EASY_RSA/keys"
# Issue rm -rf warning
echo NOTE: If you run ./clean-all, I will be doing a rm -rf on $KEY_DIR
# PKCS11 fixes
export PKCS11_MODULE_PATH="dummy"
export PKCS11_PIN="dummy"
# Increase this to 2048 if you
# are paranoid. This will slow
# down TLS negotiation performance
# as well as the one-time DH parms
# generation process.
export KEY_SIZE=1024
# In how many days should the root CA key expire?
export CA_EXPIRE=3650
# In how many days should certificates expire?
export KEY_EXPIRE=3650
# These are the default values for fields
# which will be placed in the certificate.
# Don't leave any of these fields blank.
export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me#myhost.mydomain"
export KEY_EMAIL=mail#host.domain
export KEY_CN=changeme
export KEY_NAME=changeme
export KEY_OU=changeme
export PKCS11_MODULE_PATH=changeme
export PKCS11_PIN=1234
Any help will be appreciated.
Thanks.
If you notice there are 7 command not found statements before the echo. There are also 7 "empty" lines before the echo. It appears that the key dir variable is properly expanding in the echo statement. After the echo statement there are 5 "empty" lines and 5 more command not found errors. This makes me think that the command not found statements are a result of the "empty" lines.
Obviously, if the line is empty it shouldn't cause that sort of error. How did the "vars" file get there? Did you copy/paste it, and invisible characters got copied in the process? Or perhaps it was edited on a device that uses a different type of carriage return?
You should open it in an editor such as vim, which can show normally hidden characters. You could also use a program like tofrodos to convert the carriage returns. When you source a file you are actually executing a script, and any variables exported become part of the shell that sourced it. Normally such scripts conform to Unix line endings.
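For example, a quick way to check for and strip DOS carriage returns (a sketch; fromdos comes with the tofrodos package, and the sed variant needs nothing extra):
# cat -A makes CRLF visible: affected lines end in ^M$
cat -A ./vars | head
# convert in place, then source again
fromdos ./vars          # or: sed -i 's/\r$//' ./vars
source ./vars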
So I was just archiving an assignment for email submission, and was asked by the instructor to do so using the tar command and create a .tgz file, which I did with the following command line script:
tar -cvf filename.tgz {main.cpp other filenames here}
No problems with the archive or anything, but when I went to email the file, Gmail blocked it, saying that my file contained an executable (I'm assuming main.cpp?) and that this was not allowed for security reasons.
So, I ran the same script, but this time created a .tar file instead, like so:
tar -cvf filename.tar {main.cpp filenames here}
Again, it archives just fine, but now Gmail is fine with me emailing the archive. So what is the difference? I've only really used tar for this purpose, so I'm not really familiar with what the different extensions are used for. Obviously, I've figured out a way to get the functionality I need, but like all tinkerers, I'm curious.
What say you?
Absolutely no difference. A filename is just a filename. Usually, when you use the tgz form, it's to indicate that you've gzipped the tar file (either as a second step or using the z flag):
tar zcvf filename.tgz {filenames}
or
tar cvf filename {filenames}
gzip -S .tgz filename
.tar, on the other hand, normally means "this is an uncompressed tar file":
tar cvf filename.tar {filenames}
Most modern tar implementations also support the j flag to use bzip2 compression, so you might also use:
tar jcvf filename.tar.bz2 {filenames}
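If you are ever unsure what an archive actually is, the file utility will tell you (a sketch using the names from above):
file filename.tar    # reports something like "POSIX tar archive"
file filename.tgz    # reports "gzip compressed data" only if it was really gzipped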
Possible Duplicate:
Renaming lots of files in Linux according to a pattern
I have multiple files in this format:
file_1.pdf
file_2.pdf
...
file_100.pdf
My question is how can I rename all files, that look like this:
file_001.pdf
file_002.pdf
...
file_100.pdf
I know you can rename multiple files with 'rename', but I don't know how to do it in this case.
You can do this using the Perl tool rename from the shell prompt. (There are other tools with the same name which may or may not be able to do this, so be careful.)
rename 's/(\d+)/sprintf("%03d", $1)/e' *.pdf
If you want to do a dry run to make sure you don't clobber any files, add the -n switch to the command.
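For example, a dry run of the command above would look like this; with -n nothing is renamed, it only prints what would be done:
rename -n 's/(\d+)/sprintf("%03d", $1)/e' *.pdf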
Note
If you run the following command (on Linux):
$ file $(readlink -f $(type -p rename))
and you get a result like
.../rename: Perl script, ASCII text executable
then this seems to be the right tool =)
This seems to be the default rename command on Ubuntu.
To make it the default on Debian and derivatives like Ubuntu:
sudo update-alternatives --set rename /path/to/rename
Explanations
s/// is the basic substitution expression: s/to_replace/replacement/; check perldoc perlre
(\d+) captures one or more digits (\d, repeated by +) into $1, with the parentheses doing the capturing
sprintf("%03d", $1) sprintf is like printf, but instead of printing it formats a string, with the same syntax. %03d means zero-padding to three digits, and $1 is the captured string. Check perldoc -f sprintf
Calling a Perl function there is permitted because of the e modifier at the end of the expression
If you want to do it with pure bash:
for f in file_*.pdf; do x="${f##*_}"; echo mv "$f" "${f%_*}$(printf '_%03d.pdf' "${x%.pdf}")"; done
(Note the debugging echo; remove it to actually perform the renames.)
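For readability, here is the same loop spelled out with comments (a sketch; drop the echo once the printed mv commands look right):
for f in file_*.pdf; do
    x="${f##*_}"                                       # e.g. "1.pdf": everything after the last "_"
    n="${x%.pdf}"                                      # e.g. "1": the bare number
    echo mv "$f" "${f%_*}$(printf '_%03d.pdf' "$n")"   # file_1.pdf -> file_001.pdf
done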