I have a client whose disk space has been taken up by mail. Is there a way I can delete all of the old emails/files that have already been deleted? I do not want to lose the client's inbox emails.
To see the oldest emails/files you can use the following command:
ls -ltr /home/cpanel_user/mail/example.com/user_email/cur
Have a look at /home/cpanel_user/mail/example.com/user_email/
I think you should look in one of these folders:
new -> unread emails
cur -> e-mails that have already been read
.Trash
.Sent
Also, you can use du -h --max-depth=1 /home/cpanel_user/mail to see which folders use the most disk space.
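If the goal is just to reclaim space from mail the client has already deleted, a minimal sketch (assuming the cPanel maildir layout above; adjust the account path, folder, and age to your case, and back up first) is to list and then remove old messages under .Trash:
# Dry run: list messages in .Trash older than 30 days
find /home/cpanel_user/mail/example.com/user_email/.Trash/{cur,new} -type f -mtime +30 -print
# Once the list looks right, delete them
find /home/cpanel_user/mail/example.com/user_email/.Trash/{cur,new} -type f -mtime +30 -delete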
I want to start by thanking you all for your help ahead of time, as this will help clear up a detail left out of the readthedocs.io guide. What I need is to compress several files into a single gzip; however, the guide only shows how to compress a list of files as individual gzipped files. Again, I appreciate any help, as there are very few resources and little documentation for this setup. (If there is some extra info, please include links to sources.)
After I had set up the grid engine, I ran through the samples in the guide.
Am I right in assuming there is not a script for combining multiple files into one gzip using grid-computing-tools?
Are there any solutions on the Elasticluster Grid Engine setup to compress multiple files into 1 gzip?
What changes can be made to the grid-engine-tools to make it work?
EDIT
The reason we are considering a cluster is that we expect multiple operations to occur simultaneously. Files are zipped up per order, and this happens systematically, so that a vendor can download a single compressed file per order.
Let me state the definition of the problem, and you can tell me whether I understood it correctly, as both Matt and I provided essentially the same solution and somehow it doesn't seem sufficient.
Problem Definition
You have an Order defining the start of a task to process some data.
The processing of data would be split among several compute nodes, each producing a resulting file stored on GS directories.
The goal is:
Collect the files from GS bucket (that were produced by each of the nodes),
Archive the collection of files as one file,
Then compress that archive, and
Push it back to a different GS location.
Let me know if I summarized it properly,
Thanks,
Paul
Are the files in question in Cloud Storage?
Are the files in question on a local or network drive?
In your description, you indicate "What I need is to compress several files into a single gzip". It isn't clear to me that a cluster of computers is needed for this. It sounds more like you just want to use tar along with gzip.
The tar utility will create an archive file, and it can compress it as well. For example:
$ # Create a directory with a few input files
$ mkdir myfiles
$ echo "This is file1" > myfiles/file1.txt
$ echo "This is file2" > myfiles/file2.txt
$ # (C)reate a compressed archive
$ tar cvfz archive.tgz myfiles/*
a myfiles/file1.txt
a myfiles/file2.txt
$ # (V)erify the archive
$ tar tvfz archive.tgz
-rw-r--r-- 0 myuser mygroup 14 Jul 20 15:19 myfiles/file1.txt
-rw-r--r-- 0 myuser mygroup 14 Jul 20 15:19 myfiles/file2.txt
To extract the contents use:
$ # E(x)tract the archive contents
$ tar xvfz archive.tgz
x myfiles/file1.txt
x myfiles/file2.txt
UPDATE:
In your updated problem description, you have indicated that you may have multiple orders processed simultaneously. If the frequency with which results need to be tarred is low, and providing the tarred results is not extremely time-sensitive, then you could likely do this with a single node.
However, as the scale of the problem ramps up, you might take a look at using the Pipelines API.
Rather than keeping a fixed cluster running, you could initiate a "pipeline" (in this case a single task) when a customer's order completes.
A call to the Pipelines API would start a VM whose sole purpose is to download the customer's files, tar them up, and push the resulting tar file into Cloud Storage. The Pipelines API infrastructure does the copying from and to Cloud Storage for you. You would effectively just need to supply the tar command line.
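As a rough sketch of the work that single VM would perform (the bucket name and paths below are placeholders, not something defined by the Pipelines API):
# Pull the per-node result files for one order
mkdir -p /tmp/order
gsutil -m cp "gs://my-bucket/orders/ORDER_ID/*" /tmp/order/
# Archive and compress them into a single file
tar cvfz /tmp/ORDER_ID.tgz -C /tmp order/
# Push the single compressed file back to Cloud Storage
gsutil cp /tmp/ORDER_ID.tgz gs://my-bucket/archives/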
There is an example that does something similar here:
https://github.com/googlegenomics/pipelines-api-examples/tree/master/compress
This example will download a list of files and compress each of them independently. It could be easily modified to tar the list of input files.
Take a look at the https://github.com/googlegenomics/pipelines-api-examples github repository for more information and examples.
-Matt
There are many ways to do it, but the catch is that you cannot directly compress a collection of files (or a directory) into one file on Google Storage; you would need to perform the tar/gzip combination locally before transferring it.
If you want, you can have the data compressed automatically during upload via:
gsutil cp -Z
Which is detailed at the following link:
https://cloud.google.com/storage/docs/gsutil/commands/cp#changing-temp-directories
And the nice thing is that you retrieve uncompressed results from compressed data on Google Storage, because it has the ability to perform Decompressive Transcoding:
https://cloud.google.com/storage/docs/transcoding#decompressive_transcoding
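For a minimal example (the bucket name is a placeholder), the object is stored gzip-encoded and is handed back decompressed to clients that do not ask for the raw bytes:
gsutil cp -Z results.csv gs://my-bucket/results.csv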
You will notice on the last line in the following script:
https://github.com/googlegenomics/grid-computing-tools/blob/master/src/compress/do_compress.sh
The following line will basically copy the current compressed file to Google Cloud Storage:
gcs_util::upload "${WS_OUT_DIR}/*" "${OUTPUT_PATH}/"
What you will need to do is first perform the tar/gzip on the files in the local scratch directory, and then gsutil-copy the compressed file over to Google Storage. Make sure that all the files that need to be compressed are in the scratch directory before starting to compress them. Most likely you will need to scp them to one of the nodes (i.e. the master), and then have the master tar/gzip the whole directory before sending it over to Google Storage. I am assuming each GCE instance has its own scratch disk, but the "gsutil cp" transfer is very fast when working on GCE.
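A minimal sketch of that sequence, run on the master node (the node names, scratch paths, and bucket below are placeholders, not taken from the grid-computing-tools scripts):
mkdir -p /scratch/collected
# Gather the per-node result files onto the master
for node in node1 node2 node3; do
    scp "${node}:/scratch/out/*" /scratch/collected/
done
# Archive and compress the whole collection into one file
tar cvfz /scratch/order.tgz -C /scratch collected/
# Push the compressed archive to Google Storage
gsutil cp /scratch/order.tgz gs://my-bucket/archives/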
Since Google Storage is fast at data transfers with Google Compute instances, the easiest second option to pursue is to comment out lines 66-69 in the do_compress.sh file:
https://github.com/googlegenomics/grid-computing-tools/blob/master/src/compress/do_compress.sh
This way no compression happens, but the copy on the last line via gcs_util::upload still transfers all of the uncompressed files to the same Google Storage bucket. Then, using "gsutil cp" from the master node, you would copy them back locally, compress them locally via tar/gzip, and copy the compressed archive back to the bucket using "gsutil cp".
Hope it helps but it's tricky,
Paul
I need to download all mail messages from a mail account with fetchmail.
When I try with POP3 I can download all mail correctly in this format:
[root@srv root]# ls /home/mail_import/MAIL_USER/new/
1453828024.7837_0.srv
1453828029.7843_0.srv
But the POP3 protocol doesn't allow choosing a folder, so I need to use IMAP.
I cannot download the mails as separate files when using IMAP. I tried, and I get a single file containing all the mails.
For example:
[root@srv home]# stat /home/mail_import/MAIL_USER/teste
File: ‘/home/mail_import/MAIL_USER/teste’
[root@srv home]# head /home/mail_import/MAIL_USER/teste
From root@SRV Tue Jan 26 18:56:31 2016
Return-path: <root@SRV>
Envelope-to: MAIL_USER@SRV
Delivery-date: Wed, 02 Dec 2015 15:47:00 -0500
I need to download all mails using IMAP as separate files, like with POP3.
My .fetchmailrc is:
set bouncemail
set no spambounce
set softbounce
set properties ""
defaults:
antispam -1
batchlimit 100
poll DOMAIN with proto IMAP
user 'USER' there with password 'PASS' is 'MAIL' here
options keep fetchall ssl mda "/usr/bin/procmail -f %F -d %T";
folder INBOX
and my .procmailrc is:
MAILDIR=/home/mail_import/MAIL_ACCOUNT
DEFAULT=$MAILDIR/INBOX
LOGFILE=/var/log/procmail
LOCKFILE=$MAILDIR/.default.lock
VERBOSE=on
:0 fhw
|formail
#
## Any other rules the user wishes to either include with INCLUDERC,
## or hardcode into this file, would go here.
## --------------------------------------------------------------------------
## If we're here, the mail didn't match any other rules, so deliver normally.
:0:
$DEFAULT
## If that fails, report an error and throw the mail away.
EXITCODE=75
:0
/dev/null
Is there a correct option to download the e-mails as separate files using IMAP, the same as with POP3?
I don't see why you are using Procmail here at all. Just run Fetchmail and let it fetch your mail. Specify a destination folder in a suitable format, and go.
Whether or not email messages end up as separate files is not a feature of the protocol; it is a feature of the delivery program you use. If you choose to deliver to a file (Berkeley mbox format, which is what you are seeing here, with a From_ line at the beginning of every message), then all messages will be delivered to a single file. If you deliver to a folder (in maildir format, for example, with the new, tmp, and cur subdirectories), you will get the result you are asking for. Just do whatever you did to get your POP3 messages into the maildir folder MAIL_USER, only using imap instead of pop3, and you are all set.
If you specifically want to do this in Procmail, change
DEFAULT=$MAILDIR/INBOX
to
DEFAULT=$MAILDIR/
But the entirety of your .procmailrc seems pointless. Why do you pipe stuff through formail? The actions you have simply duplicate Procmail's default behavior, with a couple of bugs. I think you could simplify both your own understanding and the process by figuring out how to have Fetchmail deliver the messages straight where you want them. (Not entirely sure whether it supports maildir, though; quick googling was inconclusive. Maybe don't specify an mda at all if that's how you made this happen with POP3.)
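For reference, a minimal .procmailrc along those lines could be as small as the sketch below (paths taken from your file; the trailing slash on DEFAULT is what makes Procmail deliver in maildir format, one file per message):
MAILDIR=/home/mail_import/MAIL_ACCOUNT
LOGFILE=/var/log/procmail
# Trailing slash = maildir delivery, one file per message
DEFAULT=$MAILDIR/
# No recipes needed: Procmail delivers to $DEFAULT by default.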
I lost some important data from my server, and I know that data was sent via email. I have root access and I need to recover those emails.
I looked into the Exim logs and I have the email ID, but when I use a command like:
root@server [/var/spool/exim/msglog]# exim -Mvh 1ZfRwk-003bDf-JB
Failed to open input file for 1ZfRwk-003bDf-JB-H: No such file or directory.
The logs look like:
2015-09-25 08:17:50 1ZfRwk-003bDf-JB <= info@myserver.com U=username P=local S=7453 id=20150925121750.117390002 ...... etc
I am running WHM under CentOS.
Is it possible to recover sent mails?
Any help would be appreciated.
No, you cannot recover any mail that was already sent from your server. You can check the logs for message 1ZfRwk-003bDf-JB with the following command to find its full log entries:
grep 1ZfRwk-003bDf-JB /var/log/exim_mainlog
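For what it's worth, the "Failed to open input file" error means the message is no longer in Exim's spool. Had it still been queued, you could have inspected it like this (using the ID from your logs):
# Check whether the message is still in the queue
exim -bp | grep 1ZfRwk-003bDf-JB
# If it is, view its header and body from the spool
exim -Mvh 1ZfRwk-003bDf-JB
exim -Mvb 1ZfRwk-003bDf-JB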
I am looking for pointers on the best approach to process incoming emails to a certain vhost and call an external script with the email data as parameters. Basically, I want to allow email to be sent to a certain "private" email address at a host, which then auto-inserts something into that site's database. I currently have Exim set up as the mail handler.
You have to follow Exim's single-file configuration structure. In the routers section, write your own custom router that will deliver email to your desired PHP script. In the transports section, write your own custom transport that ensures delivery to the desired script using curl. Just add the following configuration to your /etc/exim.conf file:
############ROUTERS
runscript:
driver = accept
transport = run_script
unseen
no_expn
no_verify
############TRANSPORT
run_script:
debug_print = "T: run_script for $local_part@$domain"
driver = pipe
command = /home/bin/curl http://my.domain.com/mailTest.php --data-urlencode $original_local_part@$original_domain
Where mailTest.php will be your destined script.
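A quick way to check that the new router and transport are picked up (the test address below is a placeholder on your domain):
# Show how Exim would route the address; the runscript router and run_script transport should appear
exim -bt someuser@my.domain.com
# Push a test message through the local Exim to exercise the pipe transport
echo "test body" | exim -v someuser@my.domain.com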
Procmail is a good generic answer. If your needs are very specific, you could hook in your own script directly from your .forward (or Exim's corresponding construct -- can't remember exactly how it differs), but oftentimes, wrapping your own script inside a simple .procmailrc helps you avoid a bunch of iffy details of email delivery, and concentrate on the actual processing.
:0
* ^Subject: secretpassword adduser \/[A-Z]+
| echo "insert $MATCH into users" | mysql -D users
I have set up a catchall router on Exim (used as the last router):
catchall:
driver = redirect
domains = +local_domains
data = ${lookup{*@$domain}lsearch{/etc/aliases}}
retry_use_local_part
This works perfectly when sending emails locally. However, if I log in to my Gmail account and send an email to whatever@mydomain.com, then I get an "Unrouteable Address" error.
Thank you for any hints to solve this issue.
In the system_aliases: section of the config file you already have a section which does the lookup in /etc/aliases.
Replace
data = ${lookup{$local_part}lsearch{/etc/aliases}}
with
data = ${lookup{$local_part}lsearch*@{/etc/aliases}}
and make sure you have *: catchall_username in /etc/aliases
This works great for a single domain mail server which is already using /etc/aliases
For this router to work, make sure that:
mydomain.com is in local_domains
there is an entry for *@mydomain.com in /etc/aliases
the MX record for mydomain.com points to the server where you've configured this
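Once that is in place, a quick way to verify the routing from the server itself:
# Should show the address being accepted and routed via the aliases lookup
exim -bt whatever@mydomain.com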
This is old as heck, but I didn't see a good answer posted and someone else might want to know the answer.
This post is geared towards Debian in single configuration file mode, but it should work on any Linux Exim4 install. For the purpose of explaining things we'll use test@example.com, which is configured with the hostname mail.example.com. The system has a real user called test, and we want to create an alias for test called alias. So the end result will be that all email sent to alias@example.com is forwarded to test@example.com, without having to create the user alias on the system.
First we need to create a place to store all of the alias files:
mkdir /etc/exim4/aliases.d
vim /etc/exim4/aliases.d/mail.example.com
The contents of the alias file for mail.example.com would be:
alias: test
vim /etc/exim4/exim4.conf.template
Now look for the section system_aliases. Here you’ll see data = ${lookup{$local_part}lsearch{/etc/aliases}} or something similar. Change that to
data = ${lookup{$local_part}lsearch{/etc/exim4/aliases.d/$domain}}
Save the file and restart Exim. The alias should now work. To add support for other domains, just add more alias files in the aliases.d directory with the correct hostname.
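On Debian, a minimal sketch of that last step (assuming the template setup above; update-exim4.conf regenerates the runtime configuration from the template before the restart):
update-exim4.conf
systemctl restart exim4    # or: service exim4 restart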
I copied and pasted this from my blog:
0xeb.info