Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 11 months ago.
I use the command below to send emails from an Ubuntu server. It seems to attach the testreport.csv file with its full path as the filename.
echo "This email is a test email" | mailx -s 'Test subject' testemail@gmail.com -A "/home/dxduser/reports/testreport.csv"
How can I stop this from happening? Is it possible to attach the file with its actual name? In this case "testreport.csv"?
I use mailx (GNU Mailutils) version 3.7 on Ubuntu 20.04.
There are multiple different mailx implementations around, so what exactly works will depend on the version you have installed.
However, as a quick and dirty workaround, you can temporarily cd into that directory (provided you have execute access to it):
( cd /home/dxduser/reports
echo "This email is a test email" |
mailx -s 'Test subject' testemail@gmail.com -A testreport.csv
)
The parentheses run the command in a subshell, so the cd affects only that subshell and the rest of your program can proceed as before.
I would regard it as a bug if your mailx implementation puts the file's full path in the Content-Disposition: filename.
An alternative approach would be to use a different client. If you can't install e.g. mutt, writing a simple shell-script wrapper that builds the MIME structure around a base64 or quoted-printable encoding of your CSV file is not particularly hard, but you have to know what you are doing. Very briefly:
( cat <<\:
Subject: test email
Content-type: text/csv
Content-disposition: attachment; filename="testreport.csv"
From: me <myself@example.org>
To: you <recipient@example.net>
Content-transfer-encoding: base64
Content-transfer-encoding: base64
:
base64 </home/dxduser/reports/testreport.csv
) | /usr/lib/sendmail -oi -t
where obviously you have to have base64 and sendmail installed, and probably tweak the path to sendmail (or just omit it if it's in your PATH).
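If Python happens to be available, the same MIME structure can be sketched with the standard library's email module, which naturally uses only the base name for the attachment. This is only a sketch; the addresses and subject are placeholders:

```python
# Build a message whose attachment carries only the base name of the
# file, not its full path. Addresses here are placeholders.
import os
from email.message import EmailMessage

def build_mail(csv_path):
    msg = EmailMessage()
    msg["Subject"] = "Test subject"
    msg["From"] = "me@example.org"
    msg["To"] = "you@example.net"
    msg.set_content("This email is a test email")
    with open(csv_path, "rb") as f:
        data = f.read()
    # os.path.basename strips the directory part, which is the point here
    msg.add_attachment(data, maintype="text", subtype="csv",
                       filename=os.path.basename(csv_path))
    return msg
```

The resulting message can then be handed to /usr/lib/sendmail or an SMTP server; the Content-Disposition header it generates contains only "testreport.csv".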
Closed 4 years ago.
I have files with non-ASCII names on a Linux server that I need to zip. Unfortunately, the UTF-8-encoded file names always come out corrupted when the archive is extracted on Windows.
Is there a way to zip files with names encoded in a specific charset different from the local system charset? Or is there a tool that can extract UTF-8-encoded file names correctly on Windows?
(If the solution is a script, PHP or Python are preferred.)
Use 7z or 7-Zip.
Compress your files on Linux and decompress them on Windows, both with 7-Zip.
For Python 2 (with Russian file names), use cp866:
with zipfile.ZipFile(file_handle, mode='w') as zip_file:
    for file_ in self._files.all():
        path = file_.file.path
        filename = u'Название файла.txt'
        try:
            filename = filename.encode('cp866')
        except UnicodeEncodeError:
            # fall back to a generated ASCII name if cp866 can't encode it
            ext = str(path.split('.')[-1])
            filename = '%s.%s' % (uuid4().hex, ext)
        zip_file.write(path, filename)
For Python 3:
file_handle = BytesIO()
with zipfile.ZipFile(file_handle, mode='w') as zip_file:
    for file_obj in files:
        zip_file.write(filename=file_obj.full_path, arcname=file_obj.file_name)
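As a sanity check for the Python 3 route: the stdlib zipfile sets the ZIP "language encoding" flag (bit 11, 0x800) for non-ASCII names, which is what lets modern Windows extractors decode them as UTF-8. A minimal in-memory demonstration:

```python
# Verify that Python 3's zipfile stores non-ASCII names as UTF-8 and
# marks them with the language-encoding flag (bit 11).
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, mode="w") as zf:
    zf.writestr("Название файла.txt", "hello")

with zipfile.ZipFile(buf) as zf:
    info = zf.infolist()[0]
    assert info.filename == "Название файла.txt"  # round-trips intact
    assert info.flag_bits & 0x800                 # UTF-8 name flag is set
```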
Closed 9 years ago.
So I was just archiving an assignment for email submission, and was asked by the instructor to do so using the tar command and create a .tgz file, which I did with the following command:
tar -cvf filename.tgz {main.cpp other filenames here}
No problems with the archive or anything, but when I went to email the file, Gmail blocked it, saying that my file contained an executable (main.cpp, I'm assuming?) and that this was not allowed for security reasons.
So, I ran the same script, but this time created a .tar file instead, like so:
tar -cvf filename.tar {main.cpp filenames here}
Again, it archives just fine, but now Gmail is fine with me emailing the archive. So what is the difference? I've only really used tar for this purpose, so I'm not very familiar with what the different extensions are used for. Obviously, I've figured out a way to get the functionality I need, but like all tinkerers, I'm curious.
What say you?
Absolutely no difference. A filename is just a filename. Usually, when you use the tgz form, it's to indicate that you've gzipped the tar file (either as a second step or using the z flag):
tar zcvf filename.tgz {filenames}
or
tar cvf filename {filenames}
gzip -S .tgz filename
.tar, on the other hand, normally means "this is an uncompressed tar file":
tar cvf filename.tar {filenames}
Most modern tar implementations also support the j flag to use bzip2 compression, so you might also use:
tar jcvf filename.tar.bz2 {filenames}
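The point that the extension is only a convention can be demonstrated with Python's tarfile module: the gzip magic bytes appear in the archive's content only when you actually compress, regardless of what the file is called. A small in-memory sketch (the archived file is made up for the demonstration):

```python
# Show that "tar" vs "tgz" is about content, not name: only the
# gzip-compressed archive starts with the gzip magic bytes 1f 8b.
import io
import tarfile

def make_archive(mode):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode=mode) as tar:
        data = b"int main() { return 0; }\n"
        info = tarfile.TarInfo(name="main.cpp")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

plain = make_archive("w")       # like: tar cvf filename.tar ...
gzipped = make_archive("w:gz")  # like: tar zcvf filename.tgz ...

assert not plain.startswith(b"\x1f\x8b")  # uncompressed tar
assert gzipped.startswith(b"\x1f\x8b")    # gzip magic, whatever the name
```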
Closed 10 years ago.
I want to download a big file from a normal HTTP link to an FTP server (under Ubuntu) without storing the file locally (as my local storage is too small).
Do you have any ideas how to do this with wget or a small Perl script? (I don't have sudo rights on the local machine.)
Here's my take, combining wget and Net::FTP on the command line.
wget -O - http://website.com/hugefile.zip | perl -MNet::FTP -e 'my $ftp = Net::FTP->new("ftp.example.com"); $ftp->login("user", "pass"); $ftp->put(\*STDIN, "hugefile.zip");'
Of course, you can put it in a file (ftpupload.pl) as well and run it.
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;
my $ftp = Net::FTP->new("ftp.example.com"); # connect to FTP server
$ftp->login("user", "pass"); # login with your credentials
# Because of the pipe we get the file content on STDIN
# Net::FTP's put is able to handle a pipe as well as a filehandle
$ftp->put(\*STDIN, "hugefile.zip");
Run it like this:
wget -O - http://website.com/hugefile.zip | perl ftpupload.pl
There's - of course - a CPAN module which makes life easy for FTP:
http://search.cpan.org/search?mode=module&query=Net%3A%3AFTP
And WWW::Mechanize looks up files, follows links, etc.
With these modules I think you can solve your problem.
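The same streaming idea can also be sketched with Python's standard library alone, if that is more familiar than Perl. The host, credentials, and URL below are placeholders:

```python
# Stream an HTTP download straight into FTP storage without touching
# local disk. Host, credentials, and URL are placeholders.
from ftplib import FTP
from urllib.request import urlopen

def http_to_ftp(url, host, user, passwd, remote_name):
    with urlopen(url) as src, FTP(host) as ftp:
        ftp.login(user, passwd)
        # storbinary reads from the file-like HTTP response in chunks,
        # so only one block is held in memory at a time
        ftp.storbinary("STOR " + remote_name, src)

# Example call (placeholder addresses, will not work as-is):
# http_to_ftp("http://website.com/hugefile.zip",
#             "ftp.example.com", "user", "pass", "hugefile.zip")
```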
You can try wput. It is not a very well-known tool, but I think it will do the job.
Use wget's output-document option
wget -O /dev/null http://foo.com/file.uuu
From wget's manual page:

-O file
--output-document=file
    The documents will not be written to the appropriate files, but all will be
    concatenated together and written to file. If - is used as file, documents
    will be printed to standard output, disabling link conversion. (Use ./- to
    print to a file literally named -.)

    Use of -O is not intended to mean simply "use the name file instead of the
    one in the URL;" rather, it is analogous to shell redirection: wget -O file
    http://foo is intended to work like wget -O - http://foo > file; file will
    be truncated immediately, and all downloaded content will be written there.
However, I can't see what the purpose of that would be.
Closed 7 years ago.
I have set up some cron jobs, and they send their results to an email address. Over the months I have accumulated a huge number of emails.
Now my question is: how can I purge all those emails from my mailbox?
An alternative way:
mail -N
d *
quit
-N inhibits the initial display of message headers when reading mail or editing a mail folder.
d * deletes all mails.
You can simply delete the /var/mail/username file to delete all emails for a specific user. Also, outgoing emails that have not yet been sent are stored in /var/spool/mqueue.
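If you would rather empty the mailbox than unlink the file, here is a sketch using Python's stdlib mailbox module, which locks the mbox while clearing it; the path below is a placeholder for /var/mail/username:

```python
# Empty an mbox file in place, holding the mailbox lock while doing so,
# rather than deleting the file itself. The path is a placeholder.
import mailbox

def purge_mbox(path):
    mb = mailbox.mbox(path)
    mb.lock()
    try:
        mb.clear()  # delete every message
        mb.flush()  # write the (now empty) mailbox back to disk
    finally:
        mb.unlock()
        mb.close()

# purge_mbox("/var/mail/username")
```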
Just use:
mail
d 1-15
quit
This deletes all messages between numbers 1 and 15. To delete all of them, use d *.
I just used this myself on Ubuntu 12.04.4, and it worked like a charm.
For example:
eric@dev ~ $ mail
Heirloom Mail version 12.4 7/29/08. Type ? for help.
"/var/spool/mail/eric": 2 messages 2 new
>N 1 Cron Daemon Tue Jul 29 17:43 23/1016 "Cron <eric@ip-10-0-1-51> /usr/bin/php /var/www/sandbox/eric/c"
N 2 Cron Daemon Tue Jul 29 17:44 23/1016 "Cron <eric@ip-10-0-1-51> /usr/bin/php /var/www/sandbox/eric/c"
& d *
& quit
Then check your mail again:
eric@dev ~ $ mail
No mail for eric
eric@dev ~ $
What is tripping you up is that you are using x or exit to quit, which rolls back the changes made during that session.
One liner:
echo 'd *' | mail -N
Rather than deleting the file, you can truncate it to zero length, since it will be recreated anyway while the mail service is running.
Something like the following will do the job:
cat /dev/null >/var/spool/mail/tomlinuxusr
And yes, sorry for awakening this old thread but I felt I could contribute.
On UNIX / Linux / Mac OS X you can copy and overwrite files, can't you? So how about this solution:
cp /dev/null /var/mail/root
If you're using cyrus/sasl/imap on your mail server, then one fast and efficient way to purge everything in a mailbox older than a specified number of days is the cyrus/imap ipurge command. For example, here is how to remove everything (be careful!) older than 30 days from user vleo. Note that you must be logged in as the cyrus (IMAP mail administrator) user:
[cyrus#mailserver ~]$ /usr/lib/cyrus-imapd/ipurge -f -d 30 user.vleo
Working on user.vleo...
total messages 4
total bytes 113183
Deleted messages 0
Deleted bytes 0
Remaining messages 4
Remaining bytes 113183
Rather than using "d", why not "p"? I am not sure if "p *" will work; I didn't try that. You can, however, use the following script:
#!/bin/bash
#
MAIL_INDEX=$(printf 'h a\nq\n' | mail | egrep -o '[0-9]* unread' | awk '{print $1}')
markAllRead=
for (( i=1; i<=$MAIL_INDEX; i++ ))
do
markAllRead=$markAllRead"p $i\n"
done
markAllRead=$markAllRead"q\n"
printf "$markAllRead" | mail
Closed 10 years ago.
This question is based on the answer.
I run in my home directory
find -- ./ Desktop
I understand the command as
find without parameters
at the current directory, which is home (= /Users/masi/)
find the folder named Desktop in the current directory
How do you read the command?
The answer to your question in the title is
$ find . -type f
Now, keep in mind that
$ find -- ./ Desktop
will return the files in Desktop twice.
In your example, "--" says to stop looking for further options. Everything after that is a path, so find matches anything under those paths. And since "./" means "the current directory", it matches everything under the current directory (listing Desktop as well causes that directory, and anything inside it, to be reported twice).
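The double reporting can be reproduced with a small Python sketch that walks the same two start paths find was given (the directory layout below is made up for the demonstration):

```python
# Reproduce why `find -- ./ Desktop` lists Desktop's contents twice:
# Desktop is reached once while walking ./ and once as its own start path.
import os
import tempfile

old_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as home:
    os.mkdir(os.path.join(home, "Desktop"))
    open(os.path.join(home, "Desktop", "note.txt"), "w").close()
    os.chdir(home)
    try:
        results = []
        for start in ("./", "Desktop"):  # the two paths given to find
            for dirpath, dirnames, filenames in os.walk(start):
                results.append(dirpath)
                results.extend(os.path.join(dirpath, f) for f in filenames)
    finally:
        os.chdir(old_cwd)

hits = [p for p in results if p.endswith("note.txt")]
assert len(hits) == 2  # the same file is reported once per start path
```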
You probably want something like:
find ./Desktop -type f
This finds anything inside the ./Desktop directory that is a regular file (not directories, symbolic links, etc.).
I know that man pages can be quite technical sometimes, but "man find" will give you a wealth of other options that might help, as well as a few examples that may help with common problems.
I think what you want is:
find ./ -name Desktop
Well, you can pass multiple directories to search to find:
$ find --help
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
[...]
Note the "[path...]" indicating you can specify multiple paths.
So your example will find all files and directories under ./ (the current dir) and under Desktop.