CentOS dropping connections after failed login attempts

I'm new to CentOS and iptables. The problem is that after a few failed authentication attempts, the server starts to drop connections. The only clues I have are:
iptables -v -L -n | grep 'LOGDROPOUT'
0 0 LOGDROPOUT all -- * !lo 0.0.0.0/0 10.0.0.23
10.0.0.23 is my IP; there are others listed.
cat /etc/sysconfig/iptables-config | grep 'LOGDROPOUT'
:LOGDROPOUT - [0:0]
-A OUTPUT ! -o lo -j LOGDROPOUT
-A LOGDROPOUT -p tcp -m tcp --tcp-flags FIN,RST,ACK SYN -m limit --limit 30/min -j LOG --log-prefix "Firewall: *TCP_OUT Blocked*" --log-uid
-A LOGDROPOUT -p udp -m limit --limit 30/min -j LOG --log-prefix "Firewall: *UDP_OUT Blocked*" --log-uid
-A LOGDROPOUT -p icmp -m limit --limit 30/min -j LOG --log-prefix "Firewall: *ICMP_OUT Blocked*" --log-uid
I typed this out by hand because I can only connect from a VM console.
My knowledge of CentOS and iptables is limited, but nobody has touched the iptables rules. I don't know where to look to find the blocking rule.
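One way to hunt for the rule that matches your address is to dump the whole ruleset rather than grepping a single chain (a minimal sketch; 10.0.0.23 stands in for whichever address is being blocked):
# print all rules in iptables-save format and search for the address
iptables -S | grep '10.0.0.23'
# or list every chain with rule line numbers to see where the match lives
iptables -L -n -v --line-numbers
# a matching rule at line N of chain CHAINNAME could then be removed with:
# iptables -D CHAINNAME N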


db_nmap metasploit using hosts in postgres database

I am using Metasploit and attempting to run db_nmap against all the hosts I imported from an nmap run that I saved to a .xml file. All the hosts are in my Metasploit postgres database, as verified when I run the hosts command. However, I am unsure how I can run db_nmap against all these hosts.
The typical command I use for a single IP is:
db_nmap -sS -Pn -A --script vuln 192.0.0.1
The command I tried to use for all IPs in my database:
db_nmap -sS -Pn -A --script vuln hosts
I also tried
db_nmap -sS -Pn -A --script vuln hosts -c
I am also currently running this as a workaround, but so far it hasn't output anything: db_nmap -sS -Pn -A --script vuln -i /home/myuser/targets.txt
I cannot find the documentation I need so I am hoping someone can help me out here.
Thank you!
Try this:
hosts -o hostcsv
cat hostcsv | awk -F"," '{print $1}' | tr -d '"' | sort -u > host.txt
db_nmap -iL host.txt
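Assuming the first column of the exported CSV is the host address, you can then feed the list back in with nmap's -iL option plus your usual scan flags, e.g.:
db_nmap -sS -Pn -A --script vuln -iL host.txt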

Kubectl port forward reliably in a shell script

I am using kubectl port-forward in a shell script, but I find it is not reliable or doesn't come up in time:
kubectl port-forward ${VOLT_NODE} ${VOLT_CLUSTER_ADMIN_PORT}:${VOLT_CLUSTER_ADMIN_PORT} -n ${NAMESPACE} &
if [ $? -ne 0 ]; then
    echo "Unable to start port forwarding to node ${VOLT_NODE} on port ${VOLT_CLUSTER_ADMIN_PORT}"
    exit 1
fi
PORT_FORWARD_PID=$!
sleep 10
Often after I sleep for 10 seconds, the port isn't open or the forwarding hasn't happened. Is there any way to wait for it to be ready? Something like kubectl wait would be ideal, but I'm open to shell options as well.
I took @AkinOzer's comment and turned it into this example, where I port-forward a PostgreSQL database's port so I can make a pg_dump of the database:
#!/bin/bash
set -e
localport=54320
typename=service/pvm-devel-kcpostgresql
remoteport=5432
# This would show that the port is closed
# nmap -sT -p $localport localhost || true
kubectl port-forward $typename $localport:$remoteport > /dev/null 2>&1 &
pid=$!
# echo pid: $pid
# kill the port-forward regardless of how this script exits
trap '{
    # echo killing $pid
    kill $pid
}' EXIT
# wait for $localport to become available
while ! nc -vz localhost $localport > /dev/null 2>&1 ; do
    # echo sleeping
    sleep 0.1
done
# This would show that the port is open
# nmap -sT -p $localport localhost
# Actually use that port for something useful - here making a backup of the
# keycloak database
PGPASSWORD=keycloak pg_dump --host=localhost --port=54320 --username=keycloak -Fc --file keycloak.dump keycloak
# the 'trap ... EXIT' above will take care of kill $pid
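One caveat with the loop above: if the port-forward process dies or never starts, nc will keep failing and the script will spin forever. A bounded variant (the 30-second budget here is an arbitrary choice) could look like:
# wait for $localport, but give up after ~30 seconds (300 * 0.1s)
tries=0
while ! nc -vz localhost $localport > /dev/null 2>&1 ; do
    sleep 0.1
    tries=$((tries + 1))
    if [ "$tries" -ge 300 ]; then
        echo "port-forward to $typename never became ready" >&2
        exit 1  # the EXIT trap above still kills the background process
    fi
done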

docker varnish cmd error - no such file or directory

I'm trying to get a Varnish container running as part of a multicontainer Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Again, looking around the running container directly, it seems that Varnish is where it thinks it should be, and the VCL file is where it thinks it should be... so what's stopping it from running within docker-compose?
Thanks!
I didn't get to the bottom of why I was getting this error, but I "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
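For what it's worth, the original error looks like a consequence of the exec (JSON-array) form of CMD: the array is not run through a shell, so the entire string "exec /usr/local/sbin/varnishd ..." is treated as the name of a single executable, which doesn't exist. Untested against this image, but either the shell form or a properly split array should avoid that:
# Shell form: runs via /bin/sh -c, so 'exec' and word splitting work
CMD exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384
# Exec form: one array element per argument, no 'exec' needed
# CMD ["/usr/local/sbin/varnishd", "-j", "unix,user=varnishd", "-F", "-f", "/etc/varnish/default.vcl", "-s", "malloc,64m", "-a", "0.0.0.0:80", "-p", "http_req_hdr_len=16384", "-p", "http_resp_hdr_len=16384"]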

How do I get pcp to automatically attach nodes to postgres pgpool?

I'm using Postgres 9.4.9 and pgpool 3.5.4 on CentOS 6.8.
I'm having a really hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky-dory.
So I figured that, until I could properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
It'll only work when I use parameters like
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no parameter for the password; I have to enter it manually at the prompt.
Any suggestions for sorting this other than using Expect?
You have to create a PCPPASSFILE. Search the pgpool documentation for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass). The file content is 127.0.0.1:9897:user:pass (hostname:port:username:password). Set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should now run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Example 2:
Create a PCPPASSFILE (vi /usr/local/etc/.pcppass). The file content is 127.0.0.1:9897:user:pass (hostname:port:username:password). Set the file permissions to 0600 (chmod 0600 /usr/local/etc/.pcppass), and set the PCPPASSFILE variable (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should now run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Script to auto-attach the nodes
You can schedule this script with, for example, crontab.
#!/bin/bash
#pgpool status
#0 - This state is only used during the initialization. PCP will never display it.
#1 - Node is up. No connections yet.
#2 - Node is up. Connections are pooled.
#3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0;
if (( $STATUS_0 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1;
if (( $STATUS_1 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
Yes; you can also trigger execution of this command using a customised failover_command (failover.sh in your /etc/pgpool).
An automated way to bring up a pgpool node that is down:
Copy this script into a file with execute permission and postgres ownership, in your desired location, on all nodes.
Run the crontab -e command as the postgres user.
Finally, set that script to run every minute in crontab; a sample entry is shown after the script below. To execute it every second, you may create your own service and run it instead.
#!/bin/bash
# This script will up all pgpool down node
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#******SCRIPT*******
#************************
server_node_list=(0 1 2)
for server_node in "${server_node_list[@]}"
do
    source $HOME/.bash_profile
    export PCPPASSFILE=/var/lib/pgsql/.pcppass
    node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n $server_node -w | cut -d ' ' -f 3)
    if [[ $node_status == 3 ]]
    then
        pcp_attach_node -n $server_node -U pgpool -p 9898 -w -v
    fi
done
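For example, a crontab entry for the postgres user that runs the script every minute might look like this (the script and log paths are just placeholders):
* * * * * /var/lib/pgsql/scripts/attach_pgpool_nodes.sh >> /var/lib/pgsql/attach_pgpool_nodes.log 2>&1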

tshark doesn't always print source ip

How can I get the TCP payload of packets with tshark, and also get the source IP that sent those packets?
This command works for most packets, but some packets are still printed WITHOUT a source IP (why?):
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e ip.src
To test my command, run it and then browse to http://stackoverflow.com. Notice that usually the data chunks ("47:45:54:20:2f:61:64:73:...") have an IP after them, but not always.
I found the problem:
The packets with the missing source IP were IPv6, but my original command only printed IPv4 addresses (ip.src).
This works:
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e ip.src -e ipv6.src
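Depending on your tshark version, an alternative may be to extract the generic source-address column, which covers IPv4 and IPv6 in one field (untested assumption for your build):
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e _ws.col.Source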