uploading files in email without restriction - email

How can I attach a file that email providers block?
I want to attach certain programs to my email, but the email provider blocks them. Is there any way to work around this problem?

On Linux:
apt install sharutils
uuencode input_filename /dev/stdout > output_filename
Use pico (or any text editor) to remove the following first line from <output_filename>:
begin 644 /dev/stdout
Attach <output_filename> to the email.
When it has been downloaded on the other side:
Use pico to add the following line back to the top of <encoded_filename>:
begin 644 /dev/stdout
uudecode encoded_filename > decoded_filename
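For reference, the same steps can be scripted in one place. This is only a sketch; program.bin and program.txt are example names, and it assumes the sharutils tools from above are installed:
# Encode and drop the "begin 644 /dev/stdout" header line so the attachment looks like plain text
uuencode program.bin /dev/stdout | sed '1d' > program.txt
# ... attach program.txt to the email; after downloading it on the other side,
# put the header line back and decode:
{ echo "begin 644 /dev/stdout"; cat program.txt; } | uudecode > program.decoded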

Related

Raspberry PI - Send mail from command line using GMAIL smtp server

How can I send email from Raspberry Pi using my gmail account?
I would like to send mail from the command line and use this method in my scripts.
Environment:
Hardware: Raspberry PI 3
OS: Jessie
SMTP: smtp.gmail.com
I use this method on my Raspberry Pi 3 devices:
Google account setting
Login to your gmail account
Go to: Settings -> Accounts and Import -> Other Google Account settings
Go to: Personal info & privacy -> Account overview
Go to: Sign-in & security -> Connect apps & sites
Set option Allow less secure apps to ON
Install SSMTP
sudo apt-get install ssmtp
Save original conf file
sudo mv /etc/ssmtp/ssmtp.conf /etc/ssmtp/ssmtp.conf.bak
Create new conf file (with vi, or some other text editor)
sudo vi /etc/ssmtp/ssmtp.conf
file content
root=your_account@gmail.com
mailhub=smtp.gmail.com:587
FromLineOverride=YES
AuthUser=your_account@gmail.com
AuthPass=your_password
UseSTARTTLS=YES
UseTLS=YES
# Debug=Yes
Secure conf file
sudo groupadd ssmtp
sudo chown :ssmtp /etc/ssmtp/ssmtp.conf
If you get an error on this step like "cannot access ...", you must find the ssmtp files and use those paths: sudo find / -name "ssmtp"
sudo chown :ssmtp /usr/sbin/ssmtp
sudo chmod 640 /etc/ssmtp/ssmtp.conf
sudo chmod g+s /usr/sbin/ssmtp
Sending mail with a single command line
echo "This is a test" | ssmtp recipient.address#some_domain.com
or
printf "To: recipient.address#some_domain.com\nFrom: RaspberryPi3\nSubject: Testing send mail from Raspberry\n\nThis is test. Best Regards!\n" | ssmtp -t
Sending mail from file test.txt
Make a file with similar content:
To: recipient.address@some_domain.com
From: your_account@gmail.com
Subject: Testing send mail from Raspberry
This is a test mail (body)
Best Regards!
Now you can send the mail from the file:
ssmtp recipient.address@some_domain.com < test.txt
That's all :)
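ssmtp only relays whatever raw message it is given, so attachments have to be wrapped in a MIME structure by hand. The sketch below shows one way to do that; the addresses, the boundary string, and report.zip are placeholders, and base64 comes from coreutils:
{
  echo "To: recipient.address@some_domain.com"
  echo "From: your_account@gmail.com"
  echo "Subject: Test mail with attachment"
  echo "MIME-Version: 1.0"
  echo 'Content-Type: multipart/mixed; boundary="MAILBOUNDARY"'
  echo
  echo "--MAILBOUNDARY"
  echo "Content-Type: text/plain"
  echo
  echo "See the attached file."
  echo "--MAILBOUNDARY"
  echo 'Content-Type: application/octet-stream; name="report.zip"'
  echo "Content-Transfer-Encoding: base64"
  echo 'Content-Disposition: attachment; filename="report.zip"'
  echo
  base64 report.zip            # encode the attachment body
  echo "--MAILBOUNDARY--"
} | ssmtp recipient.address@some_domain.com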

visudo nopasswd is not working for wget to execute shell_exec

I have added the line below at the bottom of the sudoers file (via visudo), just before the Defaults passprompt="Password:" line.
ALL=(root) NOPASSWD:/usr/bin/wget
Whenever I try to run the wget command, it asks for a password. I can only download files after using the sudo command and entering the password. Can anyone help me download files without a password prompt, as I need to run the command through shell_exec in a PHP file?
The following rule gives all users the right to execute /usr/bin/wget as root without a password:
ALL ALL=(root) NOPASSWD: /usr/bin/wget
You can then run
sudo wget google.com
without being asked for a password.
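If it still prompts, the checks below can help confirm the rule is actually in effect. This is a sketch: www-data is an assumed web-server user (on CentOS it is usually apache), and sudoers should always be edited with visudo so a syntax error does not lock you out:
sudo visudo -c                    # syntax-check /etc/sudoers and included files
sudo -l -U www-data               # list what that user is allowed to run
sudo -n /usr/bin/wget -q https://example.com -O /tmp/index.html   # -n fails instead of prompting for a password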

centos - plesk - apache file owner

I have a question.
Server: VPS
System: Centos 6 + Plesk 11
safe_mode = off;
Problem:
I have a script that creates folders for users:
mkdir('/var/www/vhosts/website.com/private/'.$user_id, 0755, true);
And through the Plesk API I create an FTP user for the new folder.
The problem is that my PHP script creates the new folder with the following group and user: apache(502)/503.
The FTP user has no rights in this folder at all.
If I create folders through FTP, the group and user are: 505/10000.
It is because your PHP script is running in mod_php mode and executes under the Apache user. The easiest solution would be to switch your site to run in FastCGI mode, so that the PHP script runs under your own PHP user and there is no ownership conflict.
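If you are not sure which handler is active, a quick way to check which system user PHP runs as is to drop a throwaway script into the document root and request it. This is only a sketch: the domain and paths are taken from the question, whoami.php is an example name, and it assumes exec() is not disabled:
echo '<?php echo exec("whoami"); ?>' > /var/www/vhosts/website.com/httpdocs/whoami.php
curl http://website.com/whoami.php    # prints "apache" under mod_php, your subscription user under FastCGI
rm /var/www/vhosts/website.com/httpdocs/whoami.php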
The question is pretty old, but I found a solution, so I thought it might be helpful for someone.
The following commands need to be executed with root access:
cd /var/www/vhosts/yourdomain.com
chown -R youruser:psacln httpdocs
chmod -R g+w httpdocs/wp-content
find httpdocs -type d -exec chmod g+s {} \;
For a detailed explanation, see the link:
http://www.ryanbelanger.com/wordpress-file-permissions/
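To double-check that the commands above took effect, something like the following can be run from /var/www/vhosts/yourdomain.com (GNU find assumed):
ls -ld httpdocs httpdocs/wp-content    # owner should be youruser:psacln, with group write on wp-content
find httpdocs -type d ! -perm -g+s     # lists any directory still missing the setgid bit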

Sending mail via shell script

I want to send a zip file via email from a shell script. I know I need to use the following command:
cat file.txt | mail -s "This is subject" name@address.com
I always get this error:
line 13: mail: command not found
Does anyone know where and how I can install the mail command?
I am using Cygwin to test my shell script.
If you are on Ubuntu, you can use apt:
sudo apt-get install mailutils postfix
Install the "nail" or "mailx" package.

Why does wget only download the index.html for some websites?

I'm trying to use the wget command:
wget -p http://www.example.com
to fetch all the files on the main page. For some websites it works, but in most cases it only downloads index.html. I've tried the wget -r command, but it doesn't work. Does anyone know how to fetch all the files on a page, or just get a list of the files and the corresponding URLs on the page?
Wget is also able to download an entire website. But because this can put a heavy load upon the server, wget will obey the robots.txt file.
wget -r -p http://www.example.com
The -p parameter tells wget to include all files, including images. This will mean that all of the HTML files will look as they should.
So what if you don't want wget to obey the robots.txt file? You can simply add -e robots=off to the command, like this:
wget -r -p -e robots=off http://www.example.com
Many sites will not let you download the entire site and will check your browser's identity. To get around this, use -U mozilla, as shown below:
wget -r -p -e robots=off -U mozilla http://www.example.com
A lot of website owners will not like the fact that you are downloading their entire site. If the server sees that you are downloading a large amount of files, it may automatically add you to its blacklist. The way around this is to wait a few seconds after every download. The way to do this with wget is to include --wait=X (where X is the number of seconds).
You can also use the parameter --random-wait to let wget choose a random number of seconds to wait. To include this in the command:
wget --random-wait -r -p -e robots=off -U mozilla http://www.example.com
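If you want to be gentler on the server than the command above, the throttling options can be combined; the values here are only examples:
wget --wait=2 --random-wait --limit-rate=200k -r -p -e robots=off -U mozilla http://www.example.com
# --wait=2 pauses about 2 seconds between requests (randomized by --random-wait),
# and --limit-rate=200k caps the download speed.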
Firstly, to clarify the question, the aim is to download index.html plus all the requisite parts of that page (images, etc). The -p option is equivalent to --page-requisites.
The reason the page requisites are not always downloaded is that they are often hosted on a different domain from the original page (a CDN, for example). By default, wget refuses to visit other hosts, so you need to enable host spanning with the --span-hosts option.
wget --page-requisites --span-hosts 'http://www.amazon.com/'
If you need to be able to load index.html and have all the page requisites load from the local version, you'll need to add the --convert-links option, so that URLs in img src attributes (for example) are rewritten to relative URLs pointing to the local versions.
Optionally, you might also want to save all the files under a single "host" directory by adding the --no-host-directories option, or save all the files in a single, flat directory by adding the --no-directories option.
Using --no-directories will result in lots of files being downloaded to the current directory, so you probably want to specify a folder name for the output files, using --directory-prefix.
wget --page-requisites --span-hosts --convert-links --no-directories --directory-prefix=output 'http://www.amazon.com/'
The link you have provided is the homepage, or /index.html, therefore it's clear that you are getting only the index.html page. For an actual download of, for example, a "test.zip" file, you need to add the exact file name at the end. For example, use the following link to download the test.zip file:
wget -p domainname.com/test.zip
Download a Full Website Using wget --mirror
The following is the command line to execute when you want to download a full website and make it available for local viewing:
wget --mirror -p --convert-links -P ./LOCAL-DIR http://www.example.com
--mirror: turn on options suitable for mirroring.
-p: download all files that are necessary to properly display a given HTML page.
--convert-links: after the download, convert the links in the documents for local viewing.
-P ./LOCAL-DIR: save all the files and directories to the specified directory.
Download Only Certain File Types Using wget -r -A
You can use this in the following situations:
Download all images from a website,
Download all videos from a website,
Download all PDF files from a website
wget -r -A.pdf http://example.com/test.pdf
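As a concrete (made-up) example, this grabs every JPG/PNG linked from a page, one level deep, into a single folder instead of recreating the site's directory tree:
wget -r -l 1 -nd -A jpg,jpeg,png -e robots=off -P ./images http://example.com/gallery/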
Another problem might be that the site you're mirroring uses links without www. So if you specify
wget -p -r http://www.example.com
it won't download any linked (internal) pages because they are from a "different" domain. If this is the case, then use
wget -p -r http://example.com
instead (without www).
I had the same problem downloading files of the CFSv2 model. I solved it by mixing the above answers, but adding the parameter --no-check-certificate:
wget -nH --cut-dirs=2 -p -e robots=off --random-wait -c -r -l 1 -A "flxf*.grb2" -U Mozilla --no-check-certificate https://nomads.ncdc.noaa.gov/modeldata/cfsv2_forecast_6-hourly_9mon_flxf/2018/201801/20180101/2018010100/
Here is a brief explanation of every parameter used; for a further explanation, see the GNU Wget manual.
-nH equivalent to --no-host-directories: Disable generation of host-prefixed directories. In this case, avoid the generation of the directory ./nomads.ncdc.noaa.gov/
--cut-dirs=<number>: Ignore directory components. In this case, avoid the generation of the directories ./modeldata/cfsv2_forecast_6-hourly_9mon_flxf/
-p equivalent to --page-requisites: This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
-e robots=off: ignore the restrictions in the robots.txt file
--random-wait: causes the time between requests to vary between 0.5 and 1.5 * wait seconds, where wait is specified using the --wait option.
-c equivalent to --continue: continue getting a partially-downloaded file.
-r equivalent to --recursive: Turn on recursive retrieving. The default maximum depth is 5
-l <depth> equivalent to --level <depth>: Specify recursion maximum depth level
-A <acclist> equivalent to --accept <acclist>: specify a comma-separated list of the name suffixes or patterns to accept.
-U <agent-string> equivalent to --user-agent=<agent-string>: The HTTP protocol allows the clients to identify themselves using a User-Agent header field. This enables distinguishing the WWW software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as ‘Wget/version’, the version being the current version number of Wget.
--no-check-certificate: Don't check the server certificate against the available certificate authorities.
I know that this thread is old, but try what is mentioned by Ritesh with:
--no-cookies
It worked for me!
If you look for index.html in the wget manual, you can find an option --default-page=name, which is index.html by default. You can change it to index.php, for example:
--default-page=index.php
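For example, combined with the recursive options from the other answers (example.com is just a placeholder):
wget -r -p --default-page=index.php http://www.example.com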
If you only get the index.html and that file looks like it only contains binary data (i.e. no readable text, only control characters), then the site is probably sending the data using gzip compression.
You can confirm this by running cat index.html | gunzip to see if it outputs readable HTML.
If this is the case, then wget's recursive feature (-r) won't work. There is a patch for wget to work with gzip compressed data, but it doesn't seem to be in the standard release yet.
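One way to check whether the server is forcing gzip, and to recover the HTML if it is (example.com stands in for the real site; wget prints the response headers on stderr, hence the 2>&1):
wget -S --spider http://www.example.com 2>&1 | grep -i 'content-encoding'
cat index.html | gunzip > index.decoded.html   # decompress the misnamed file into readable HTML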