SELinux blocking mod_evasive (Apache) from running a command as sudo - CentOS

I am trying to configure mod_evasive on my CentOS 7 server (VPS) to prevent DDoS attacks.
I followed the steps mentioned in the following tutorial (this link), although I am not using iptables; I am using firewalld instead.
In mod_evasive, the directive to execute a shell command when an IP is blacklisted is DOSSystemCommand. But when an IP is blocked, the script I want to run, which submits the IP address to firewalld for blocking, is blocked by SELinux. From what I understood after looking at audit.log, SELinux does not allow the apache user to run a command via sudo, even though this is allowed in the sudoers file.
(mod_evasive itself is working properly; I verified this by running the test script provided by mod_evasive, and I also get the email when an IP is blocked.)
Here are the specifics:
mod_evasive.conf file (only the DOSSystemCommand directive shown below)
DOSSystemCommand "sudo /var/www/mod_evasive/ban_ip_mod_evasive.sh"
I have also tried the variation below (with different users too, e.g. the root user):
DOSSystemCommand "sudo -u apache '/var/www/mod_evasive/ban_ip_mod_evasive.sh %s'"
Sudoers file (only the part I modified for this purpose, i.e. allowing apache to use sudo):
apache ALL=(ALL) NOPASSWD:ALL
Defaults:apache !requiretty
I have allowed apache to run any command as any user, just for testing purposes. Initially I had restricted it to the specific command as follows:
apache ALL=NOPASSWD: /usr/local/bin/myscripts/ban_ip.sh
Defaults:apache !requiretty
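To double-check that this sudoers entry is in effect for apache, listing its allowed commands should show the script:
sudo -l -U apache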
My ban_ip.sh file
#!/bin/sh
# IP that will be blocked, as detected by mod_evasive
IP=$1
sudo firewall-cmd --add-rich-rule="rule family=\"ipv4\" source address=\"$IP\" service name=\"https\" reject"
I have verified that the configuration is working by running the following
sudo -u apache /usr/local/bin/myscripts/ban_ip.sh 111.111.111.111
The above command successfully adds the rule to firewalld.
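A quick way to confirm the rule really landed is to list the rich rules (they go into the default zone unless --zone is given):
sudo firewall-cmd --list-rich-rules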
This entire setup works when I disable SELinux.
After I enable SELinux, the following errors appear in audit.log:
type=AVC msg=audit(1593067671.808:1363): avc: denied { setuid } for pid=3332 comm="sudo" capability=7 scontext=system_u:system_r:httpd_sys_script_t:s0 tcontext=system_u:system_r:httpd_sys_script_t:s0 tclass=capability permissive=0
Was caused by:
Missing type enforcement (TE) allow rule.
You can use audit2allow to generate a loadable module to allow this access.
type=AVC msg=audit(1593067671.808:1364): avc: denied { setgid } for pid=3332 comm="sudo" capability=6 scontext=system_u:system_r:httpd_sys_script_t:s0 tcontext=system_u:system_r:httpd_sys_script_t:s0 tclass=capability permissive=0
Was caused by:
Missing type enforcement (TE) allow rule.
You can use audit2allow to generate a loadable module to allow this access.
The following SELinux booleans are enabled:
httpd_mod_auth_pam
httpd_setrlimit
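Their current values can be confirmed with getsebool:
getsebool httpd_mod_auth_pam httpd_setrlimit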
I am not sure why SELinux blocks this, and therefore I am hesitant to create a custom policy to allow this behavior, as audit2allow suggests.
audit2allow output
#============= httpd_sys_script_t ==============
allow httpd_sys_script_t self:capability { setgid setuid };
#============= httpd_t ==============
allow httpd_t systemd_logind_t:dbus send_msg;
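For reference, if I do end up going the custom-policy route, my understanding is that the usual workflow would be roughly the following (the module name evasive_local is just a placeholder):
grep denied /var/log/audit/audit.log | audit2allow -M evasive_local
semodule -i evasive_local.pp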
I feel mod_evasive would not help much in stopping a DDoS if we cannot block the IP and instead merely send a 403 Forbidden response to the attacker; that still consumes server resources.
How can I solve this, and what other alternatives can I implement to mitigate DDoS attacks and enhance my VPS protection?
Thanks for your help.

Related

Kubernetes Service pinging not working from time to time: "Temporary failure in name resolution"

I have two separate clusters (Application and DB) in the same namespace: a StatefulSet for the DB cluster and a Deployment for the Application cluster. For internal communication I have configured a headless Service. When I ping from a pod in the application cluster to the service it works (it works the other way round too: DB pod to service works). But sometimes, for example if I execute the ping command about 3 times in a row, the third time it gives an error: "ping: : Temporary failure in name resolution". Why is this happening?
As far as I know this is usually a name resolution error and shows that your DNS server cannot resolve domain names into their respective IP addresses. This can present a grave challenge, as you will not be able to update, upgrade, or even install any software packages on your Linux system. Here I have listed a few reasons:
1. Missing or wrongly configured resolv.conf file
The /etc/resolv.conf file is the resolver configuration file in Linux systems. It contains the DNS entries that help your Linux system to resolve domain names into IP addresses.
If this file is not present, or it is there but you are still having the name resolution error, create one and append the Google public DNS server as nameserver 8.8.8.8.
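For example, one quick way to append it (adjust to your environment; note that a resolv.conf managed by the cluster or by NetworkManager may be overwritten):
$ echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf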
Save the changes and restart the systemd-resolved service as shown.
$ sudo systemctl restart systemd-resolved.service
It’s also prudent to check the status of the resolver and ensure that it is active and running as expected:
$ sudo systemctl status systemd-resolved.service
2. Due to Firewall Restrictions
If by some chance the first solution did not work for you, firewall restrictions could be preventing you from successfully performing DNS queries. Check your firewall and confirm that port 53 (used for DNS, Domain Name Resolution) and port 43 (used for WHOIS lookups) are open. If the ports are blocked, open them as follows:
For UFW firewall (Ubuntu / Debian and Mint)
To open ports 53 & 43 on the UFW firewall run the commands below:
$ sudo ufw allow 53/tcp
$ sudo ufw allow 43/tcp
$ sudo ufw reload
For firewalld (RHEL / CentOS / Fedora)
For Redhat based systems such as CentOS, invoke the commands below:
$ sudo firewall-cmd --add-port=53/tcp --permanent
$ sudo firewall-cmd --add-port=43/tcp --permanent
$ sudo firewall-cmd --reload
I hope that you now have an idea about the 'temporary failure in name resolution' error. I also found a similar GitHub issue that may help:
https://github.com/kubernetes/kubernetes/issues/6667
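Since the error here happens inside Kubernetes pods, it is also worth confirming that the cluster DNS pods themselves are healthy and that the pod's resolv.conf points at the cluster DNS service (a quick sketch; the label matches the default kube-dns/CoreDNS deployment and the pod name is a placeholder):
$ kubectl get pods -n kube-system -l k8s-app=kube-dns
$ kubectl exec -it <your-app-pod> -- cat /etc/resolv.conf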

Moving MongoDB dbpath to an AWS EBS device

I'm using CentOS 7 via AWS.
I'd like to store MongoDB data on an attached EBS instead of the default /var/lib path.
However, when I edit /etc/mongod.conf to point to a new dbpath, I'm getting a permission denied error.
Permissions are set correctly to mongod.mongod on the dir.
What gives?
TL;DR - The issue is SELinux, which affects what daemons can access. Run setenforce 0 to temporarily disable.
You're using a flavour of Linux that uses SELinux.
From Wikipedia:
SELinux can potentially control which activities a system allows each user, process and daemon, with very precise specifications. However, it is mostly used to confine daemons like database engines or web servers that have more clearly defined data access and activity rights. This limits potential harm from a confined daemon that becomes compromised. Ordinary user-processes often run in the unconfined domain, not restricted by SELinux but still restricted by the classic Linux access rights.
To fix temporarily:
sudo setenforce 0
This puts SELinux into permissive mode, which should allow the service to run.
To fix permanently:
Edit /etc/sysconfig/selinux and set this:
SELINUX=disabled
Then reboot.
The service should now start up fine.
The data dir will also work with Docker, i.e. something like:
docker run --name db -v /mnt/path-to-mounted-ebs:/data/db -p 27017:27017 mongo:latest
Warning: Both solutions DISABLE the security that SELinux provides, which will weaken your overall security. A better solution is to understand how SELinux works, and create a policy on your new data dir that works with mongod. See https://wiki.centos.org/HowTos/SELinux for a more complete tutorial.
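A minimal sketch of that better approach, assuming the new dbpath is the mounted EBS volume used in the docker example above and that your policy uses the mongod_var_lib_t type (check the label on the default directory first, as the type name can differ on your system):
ls -Zd /var/lib/mongo
sudo semanage fcontext -a -t mongod_var_lib_t '/mnt/path-to-mounted-ebs(/.*)?'
sudo restorecon -Rv /mnt/path-to-mounted-ebs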

How to access RabbitMQ publicly

I have installed and set up RabbitMQ on a remote CentOS server. Later I created a file "rabbitmq.config" and added the line
[{rabbit, [{loopback_users, []}]}]
and then restarted the RabbitMQ server. I again tried to log in to the RabbitMQ management web interface from my local machine using the guest credentials, but I am getting a
login failed
error message. What is the proper way to empty the loopback user settings for RabbitMQ on CentOS?
First of all, connect to your RabbitMQ server machine using an SSH client (like PuTTY) so you are able to run rabbitmqctl, and get into the sbin directory of the RabbitMQ installation.
You need to create a user for a vhost on that system (here I use the default vhost "/"):
$ rabbitmqctl add_user yourName yourPass
Set the permissions for that user on the default vhost:
$ rabbitmqctl set_permissions -p / yourName ".*" ".*" ".*"
Set the administrator tag for this user (to enable them to access the management plugin):
$ rabbitmqctl set_user_tags yourName administrator
... and you are ready to log in to your RabbitMQ management GUI using yourName and yourPass from any browser by pointing it to http://"*********":15672 where ***** is your server IP.
hope it helps...
:-)
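To double-check that the user and its permissions were created, listing them should work:
$ rabbitmqctl list_users
$ rabbitmqctl list_permissions -p /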
There is an example config file; on CentOS do:
cp /usr/share/doc/rabbitmq-server-3.4.2/rabbitmq.config.example /etc/rabbitmq/rabbitmq.config
Find the following entry, uncomment it, and remove the trailing comma:
{loopback_users, []}
Then, stop rabbitmq:
rabbitmqctl stop
Now start the server:
service rabbitmq-server start
Now user "guest" can access from anywhere.
Since RabbitMQ 3.3.0 you can't use the default guest/guest credentials except via localhost (see the release notes for 3.3.0 for details).
As a possible solution you can (and probably should) create a custom secured user to be used for monitoring, management, etc.
Also you can use proxy setup.
P.S.:
If you changed loopback_users, check that the proper config is loaded (for the running NODENAME), that it is well-formed (valid syntax, ending with a .), that the management plugin is activated and started, and that no firewall blocking rules exist.
P.P.S.:
Check that the default user is guest, that it exists, and that it has the default (guest) password. If you use some library to access RabbitMQ, check that it has the same defaults as the remote side (guest:guest) or specify them explicitly.
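On CentOS with firewalld, a quick way to rule out the firewall (using the RabbitMQ defaults: 5672 for AMQP and 15672 for the management UI) would be something like:
$ sudo firewall-cmd --add-port=5672/tcp --permanent
$ sudo firewall-cmd --add-port=15672/tcp --permanent
$ sudo firewall-cmd --reload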

Bootstrapping issues in Chef

I have set up a basic infrastructure using Chef. This includes a local Chef server (Ubuntu based), a workstation and an Ubuntu based server (to be used as the node). Please note that the entire infrastructure lies behind the firewall in my office network, and I have made the necessary proxy settings for the servers to access the internet.
So here is the problem: when I try to bootstrap the node using
knife bootstrap <node's ip> --sudo -x <username> -P <password> -N "<name>"
I get the following error:
<node's ip> --2014-02-19 10:47:10-- https://www.opscode.com/chef/install.sh
<node's ip> Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
<node's ip>1 Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... failed:Connection refused.
<node's ip> bash: line 83: chef-client: command not found
I was not able to find a solution to this. However, I came across the knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" setting that can be added to knife.rb. I did this (entering my office proxy details) and the connection during bootstrap was then successful, and the chef-client was downloaded on the node. However, this setting only defines the proxy that should be used by the node, so it led to http_proxy = "http://username:password@proxyIP:port/" being set in client.rb. But because I have already made all the proxy settings on my server, the chef-client failed to launch. So I manually removed the http_proxy and https_proxy settings from client.rb and ran chef-client, which was then successful.
I have two questions -
1) Why did knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" work, given that it only defines the proxy that should be used by the node?
2) Also, all the proxy settings for the node have already been done, and I do not want any proxy settings in client.rb. How do I achieve this?
Please help!
When it comes to your client.rb I'd suggest looking into https://github.com/opscode-cookbooks/chef-client
It's a wrapper cookbook that manages client.rb(s).
Not sure about your knife[:bootstrap_proxy] though; ideally that cookbook should take care of it. If you are still stumped, you can run chef-client -VV and knife -VV to see exactly what they are doing.
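If the goal is simply to keep the node's own proxy environment out of client.rb, one option worth checking (availability depends on your Chef/knife version) is the companion no-proxy setting next to the bootstrap proxy in knife.rb, for example:
knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/"
knife[:bootstrap_no_proxy] = "localhost,127.0.0.1,<chef-server-address>"
Here <chef-server-address> is a placeholder for your internal Chef server.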

PostgreSQL COPY command giving Permission denied error

I am trying to COPY a file into a table in PostgreSQL. The table owner is postgres and the file owner is postgres.
The file is in /tmp.
Still I am getting the error message:
could not open file "/tmp/file" for reading: Permission denied
I don't understand what I am doing wrong as all the posts I've found say that if I have the file in /tmp and owner is postgres then the COPY command should work.
A guess: you are using Fedora, Red Hat Enterprise Linux, CentOS, Scientific Linux, or one of the other distros that enable SELinux by default.
Either, on your particular OS/version, the SELinux policies for PostgreSQL do not permit the server to read files outside the PostgreSQL data directory, or the file was created by a service covered by a targeted policy, so it has a label that PostgreSQL isn't allowed to read from.
You can confirm whether or not this is the problem by running, as root:
setenforce 0
then re-testing. Run:
setenforce 1
to re-enable SELinux after testing. setenforce isn't permanent; SELinux will be automatically re-enabled on reboot anyway. Disabling SELinux permanently is not usually a good solution for issues like this; if you confirm the issue is SELinux it can be explored further.
Since you have not specified the OS or version you are using, the PostgreSQL version, the exact command you're running, ls -al on the file, \d+ on the table, etc., it's hard to give any more detail or to know if this is more than a guess. Try updating your question to include all that, and an ls --lcontext of the file too.
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server. (source: postgresql documentation)
So the file should be readable (or writable) by the Unix user under which the PostgreSQL server is running (i.e. not your user!). To be absolutely sure, you can try running sudo -u postgres head /tmp/test.csv (assuming you are allowed to use sudo and the database user is postgres).
If that fails, it might be an issue related to SELinux (as mentioned by Craig Ringer). Under the most common SELinux policy (the "targeted" reference policy), used by Red Hat/Fedora/CentOS, Scientific Linux, Debian and others, the PostgreSQL server process is confined: it can only read/write a few file types.
The denial might not be logged in auditd's log file (/var/log/audit/audit.log) due to a dontaudit rule, so the usual SELinux quick test applies, e.g.: stop SELinux from confining any process by running getenforce; setenforce 0; getenforce, then test PostgreSQL's COPY. Then re-activate SELinux by running setenforce 1 (this command modifies the running state, not the configuration file, so SELinux will be active (Enforcing) again after a reboot).
The proper way to fix that is to change the SELinux context of the file to load. A quick hack is to run:
chcon -t postgresql_tmp_t /tmp/a.csv
But this file labelling will not survive if the filesystem is relabelled or if you create a new file. You will need to create a directory with an SELinux file context mapping:
which semanage || yum install policycoreutils-python
semanage fcontext -a -t postgresql_tmp_t '/srv/psql_copydir(/.*)?'
mkdir /srv/psql_copydir
chmod 750 /srv/psql_copydir
chgrp postgres /srv/psql_copydir
restorecon -Rv /srv/psql_copydir
ls -Zd /srv/psql_copydir
Any file created in that directory should have the proper file context automatically so postgresql server can read/write it.
(To check the SELinux context under which postgres is running, run ps xaZ | grep "postmaste[r]" | grep -o "[a-z_]*_t", which should print postgresql_t. To list the context types to which postgresql_t can write, use sesearch -s postgresql_t -A | grep ': file.*write'. The sesearch command belongs to the setools-console RPM package.)
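With the directory labelled, the COPY itself just points at the new path, e.g. (the database, table and file names here are placeholders):
sudo -u postgres psql -d mydb -c "COPY mytable FROM '/srv/psql_copydir/mydata.csv' WITH (FORMAT csv);"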