Infinispan JGroups firewall ports - JBoss

When using JGroups with a component such as Infinispan, which ports do I need to open in the firewall?
My JGroups configuration file is:
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
    <UDP
         mcast_addr="${test.jgroups.udp.mcast_addr:239.255.0.1}"
         mcast_port="${test.jgroups.udp.mcast_port:46655}"
         bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"
         bind_port="${test.jgroups.bind.port:46656}"
         port_range="${test.jgroups.bind.port.range:20}"
         tos="8"
         ucast_recv_buf_size="25M"
         ucast_send_buf_size="1M"
         mcast_recv_buf_size="25M"
         mcast_send_buf_size="1M"
         loopback="true"
         max_bundle_size="64k"
         ip_ttl="${test.jgroups.udp.ip_ttl:2}"
         enable_diagnostics="false"
         bundler_type="old"
         thread_naming_pattern="pl"
         thread_pool.enabled="true"
         thread_pool.min_threads="3"
         thread_pool.max_threads="10"
         thread_pool.keep_alive_time="60000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="Discard"
         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="2"
         oob_thread_pool.max_threads="4"
         oob_thread_pool.keep_alive_time="60000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="Discard"
         internal_thread_pool.enabled="true"
         internal_thread_pool.min_threads="1"
         internal_thread_pool.max_threads="4"
         internal_thread_pool.keep_alive_time="60000"
         internal_thread_pool.queue_enabled="true"
         internal_thread_pool.queue_max_size="10000"
         internal_thread_pool.rejection_policy="Discard"
    />
    <PING timeout="3000" num_initial_members="3"/>
    <MERGE2 max_interval="30000" min_interval="10000"/>
    <FD_SOCK bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"
             start_port="${test.jgroups.fd.sock.start.port:9780}"
             port_range="${test.jgroups.fd.sock.port.range:10}"/>
    <FD_ALL timeout="15000" interval="3000"/>
    <VERIFY_SUSPECT timeout="1500" bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"/>
    <pbcast.NAKACK2
         xmit_interval="1000"
         xmit_table_num_rows="100"
         xmit_table_msgs_per_row="10000"
         xmit_table_max_compaction_time="10000"
         max_msg_batch_size="100"/>
    <UNICAST3
         xmit_interval="500"
         xmit_table_num_rows="20"
         xmit_table_msgs_per_row="10000"
         xmit_table_max_compaction_time="10000"
         max_msg_batch_size="100"
         conn_expiry_timeout="0"/>
    <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
    <pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
    <tom.TOA/> <!-- the TOA is only needed for total order transactions -->
    <UFC max_credits="2m" min_threshold="0.40"/>
    <MFC max_credits="2m" min_threshold="0.40"/>
    <FRAG2 frag_size="30k"/>
    <RSVP timeout="60000" resend_interval="500" ack_on_delivery="false"/>
</config>
Now I have the following ports open in the firewall (Chain INPUT (policy ACCEPT)):
ACCEPT udp -- anywhere anywhere multiport dports 46655:46676 /* 205 ipv4 */ state NEW
ACCEPT tcp -- anywhere anywhere multiport dports 9780:9790 /* 204 ipv4 */ state NEW
But still, after running the Infinispan embedded cache for a few minutes, I'm getting:
2014-11-05 18:21:38.025 DEBUG org.jgroups.protocols.FD_ALL - haven't received a heartbeat from <hostname>-47289 for 15960 ms, adding it to suspect list
It works fine if I turn off iptables.
Thanks in advance.

How did you set up your iptables rules? I used (on Fedora 18):
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -p udp --dport 46655:46676 -j ACCEPT
iptables -A OUTPUT -p udp --dport 46655:46676 -j ACCEPT
iptables -A INPUT -p tcp --dport 9780:9790 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 9780:9790 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP
This works for me, between 2 hosts.

In my environment, I have two blades on the same chassis with IP bonding (mode 1) configured. When the eth0 interface is active on both blades, everything works perfectly. When eth0 is active on one blade and eth1 is active on the other, the cluster gets created, but after a while it starts throwing the "haven't received a heartbeat" exception, and no process leaves the cluster even after a long period of time (hours).
There is a JGroups bug that can be associated with your problem: "Don't receive heartbeat in NIC teaming configuration after NIC switch".
Workaround: switching to TCP (a sketch of the corresponding firewall openings follows below).
See also:
JGroups Advanced Concepts - TCP
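For completeness, a rough sketch of the extra firewall openings a TCP-based stack would need. The ports are assumptions: the stock JGroups tcp.xml uses TCP.bind_port=7800, and the FD_SOCK range 9780-9790 is taken from the config above; adjust both to whatever your TCP stack actually sets. Note also that discovery has to change with the stack (e.g. TCPPING, or MPING, which would still need the UDP multicast port open).
# Hypothetical ports for a TCP stack: TCP.bind_port=7800 with port_range=10
iptables -A INPUT  -p tcp --dport 7800:7810 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 7800:7810 -j ACCEPT
# FD_SOCK range kept from the config above
iptables -A INPUT  -p tcp --dport 9780:9790 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 9780:9790 -j ACCEPT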

What's mode 1 again? Failover, I assume? Not load balancing, right?
Perhaps you need to use iptables -i INTF where INTF is either eth0 or eth1? I'm not an expert in iptables, but perhaps you need to use the logical name, e.g. iptables -i bond0?
I suggest using wireshark to see which packets are received, and/or enabling debugging of the DROPs in iptables on both boxes.
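For example, a hedged sketch of how to make the drops visible, assuming the rules above end with a plain DROP and the bonded logical interface is called bond0:
# Log (rate-limited) right before the final DROP so rejected packets show up in the kernel log
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPT-DROP: " --log-level 4
iptables -A INPUT -j DROP
# Watch the log on both boxes (or use journalctl -kf)
tail -f /var/log/messages | grep "IPT-DROP: "
# Capture the cluster traffic on the bonded interface
tcpdump -i bond0 -n 'udp and portrange 46655-46676'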

This was happening because the switch was dropping UDP packets due to limits defined on the switch for UDP traffic...

Related

openvswitch refuses to forward packets between interfaces

I have an Open vSwitch instance that refuses to forward packets between two interfaces.
First of all, I disabled other_config:dpdk-init. Now there are no other_config values in the Open_vSwitch table.
Next I created a new bridge and only added the PF (physical) interfaces. These interfaces are currently tagged on the same VLAN and were configured with netplan.
$ ip -br a | grep 3935
enp65s0f1.3935@enp65s0f1 UP fe80::63f:72ff:feea:ca55/64
enp65s0f0.3935@enp65s0f0 UP fe80::63f:72ff:feea:ca54/64
I then removed the DPDK ports from OVS and created a new bridge, then added the PF interfaces as non-DPDK interfaces. I did not tag these interfaces in OVS, as the interfaces are tagged via netplan. Note: I did try tagging the interfaces in OVS as well, even though they are already tagged by netplan; it did not help.
$ sudo ovs-vsctl add-br br0
$ sudo ovs-vsctl add-port br0 enp65s0f0.3935
$ sudo ovs-vsctl add-port br0 enp65s0f1.3935
$ sudo ovs-vsctl show
e2a3fa00-23c9-4c3d-b9b6-e37df4f00dd7
    Bridge br0_dpdk
        datapath_type: netdev
        Port br0_dpdk
            Interface br0_dpdk
                type: internal
    Bridge br0
        Port enp65s0f0.3935
            Interface enp65s0f0.3935
        Port enp65s0f1.3935
            Interface enp65s0f1.3935
        Port ovsmi175332
            Interface ovsmi175332
        Port ovsmi138678
            Interface ovsmi138678
        Port br0
            Interface br0
                type: internal
    ovs_version: "2.17.2"
Then I started sending traffic. I can now see traffic being received (RX) on the inbound interface (enp65s0f0.3935) from the OVS standpoint:
$ sudo ovs-ofctl dump-ports br0 enp65s0f0.3935
OFPST_PORT reply (xid=0x4): 1 ports
port "enp65s0f0.3935": rx pkts=7299742, bytes=435598113, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2826, bytes=3288309, drop=0, errs=0, coll=0
I was also able to use ovs-tcpdump -i enp65s0f0.3935 and saw traffic, so I know the OVS port is receiving traffic.
I checked the forwarding table for OVS and I see MAC addresses coming from the RX interface.
$ sudo ovs-appctl fdb/show br0 | head -n 10
port VLAN MAC Age
22 0 02:1a:c5:03:00:0f 0
22 0 02:1a:c5:03:00:17 0
22 0 02:1a:c5:03:00:15 0
22 0 02:1a:c5:03:00:24 0
22 0 02:1a:c5:03:00:0c 0
22 0 02:1a:c5:03:00:2a 0
22 0 02:1a:c5:03:00:0b 0
22 0 02:1a:c5:03:00:33 0
22 0 02:1a:c5:03:00:40 0
And here is the flow output from both interfaces
$ sudo ovs-appctl ofproto/trace br0 "in_port=22"
Flow: in_port=22,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x0000
bridge("br0")
-------------
0. priority 0
NORMAL
-> no learned MAC for destination, flooding
Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=22,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x0000
Datapath actions: 4,1,3,5
$ sudo ovs-appctl ofproto/trace br0 "in_port=23"
Flow: in_port=23,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x0000
bridge("br0")
-------------
0. priority 0
NORMAL
-> no learned MAC for destination, flooding
Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=23,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x0000
Datapath actions: 5,1,2,4
However, no traffic is being transmitted (TX) from enp65s0f0.3935 to enp65s0f1.3935.
Why is the traffic not being forwarded? What am I missing?
EDIT:
I did another test where I connected two namespaces to OVS using a different bridge. Doing this, I am able to ping across the OVS system between the two namespaces with no problems. The problem seems to be with the physical interfaces. Note that these are Mellanox interfaces.
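For reference, this is roughly how such a namespace test through OVS can be set up; all names (br-test, ns1, ns2, the veth pairs and addresses) are made up for illustration:
# Two namespaces attached to a test OVS bridge via veth pairs (all names hypothetical)
sudo ovs-vsctl add-br br-test
sudo ip netns add ns1
sudo ip netns add ns2
sudo ip link add veth1 type veth peer name ovs-veth1
sudo ip link add veth2 type veth peer name ovs-veth2
sudo ip link set veth1 netns ns1
sudo ip link set veth2 netns ns2
sudo ovs-vsctl add-port br-test ovs-veth1
sudo ovs-vsctl add-port br-test ovs-veth2
sudo ip link set ovs-veth1 up
sudo ip link set ovs-veth2 up
sudo ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
sudo ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2
sudo ip netns exec ns1 ip link set veth1 up
sudo ip netns exec ns2 ip link set veth2 up
sudo ip netns exec ns1 ping -c 3 10.0.0.2   # works, so the OVS datapath itself forwards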

Confused about fail2ban behavior with firewalld in CentOS 7

I was using fail2ban/iptables on a CentOS 6 server.
I moved to CentOS 7 and now I am using fail2ban/firewalld (installed by Webmin/Virtualmin with their defaults).
These are screenshots of cat /var/log/maillog | grep "disconnect from unknown".
cat /var/log/fail2ban.log | grep Ban only displays:
2019-10-27 16:52:22,975 fail2ban.actions [8792]: NOTICE [proftpd] Ban 111.225.204.32
Furthermore, tailf /var/log/fail2ban.log displays several "already banned" messages for the same IP. In this case, after maxretry is reached, fail2ban tries to ban the IP again.
Here are my configurations (partial); I left them at their defaults but changed the ban times.
jail.local
[postfix]
enabled = true
port = smtp,465,submission
bantime = -1
[postfix-sasl]
enabled = true
port = smtp,465,submission,imap3,imaps,pop3,pop3s
bantime = -1
[dovecot]
enabled = true
port = pop3,pop3s,imap,imaps,submission,465,sieve
bantime = -1
jail.conf
[DEFAULT]
findtime = 600
maxretry = 5
backend = auto
filter = %(__name__)s
port = 0:65535
banaction = iptables-multiport
banaction_allports = iptables-allports
action_ = %(banaction)s[name=%(__name__)s, bantime="%(bantime)s", port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
action = %(action_)s
jail.d/00-firewalld.conf
[DEFAULT]
banaction = firewallcmd-ipset
These files exist: action.d/firewallcmd-ipset.conf and filter.d/postfix.conf
firewall-cmd --direct --get-all-rules
ipv4 filter INPUT_direct 0 -p tcp -m multiport --dports ssh -m set --match-set fail2ban-default src -j REJECT --reject-with icmp-port-unreachable
ipv4 filter INPUT 0 -p tcp -m multiport --dports ssh -m set --match-set fail2ban-sshd src -j REJECT --reject-with icmp-port-unreachable
ipv4 filter INPUT 0 -p tcp -m multiport --dports 10000 -m set --match-set fail2ban-webmin-auth src -j REJECT --reject-with icmp-port-unreachable
ipv4 filter INPUT 0 -p tcp -m multiport --dports ssh,sftp -m set --match-set fail2ban-ssh-ddos src -j REJECT --reject-with icmp-port-unreachable
After manually running
firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='193.56.28.0/24' reject"
and
firewall-cmd --reload
the "already banned" output in tailf /var/log/fail2ban.log stopped.
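As a side note, whether that rich rule actually landed in the runtime and permanent configuration can be checked with standard firewall-cmd options:
# Rich rules active right now (runtime configuration)
firewall-cmd --list-rich-rules
# Rich rules stored permanently (what survives a reload)
firewall-cmd --permanent --list-rich-rules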
How can I get all those IPs banned after they reach the maxretry value?
Would they stay banned forever despite service restarts or reloads?
Edit 1:
From fail2ban.log with action=firewalld-cmd ipset
From fail2ban.log with action=iptables-allports
Edit 2:
It seems (I guess) that something is flushing the configuration (I guess it would be Webmin), because after a while I start getting error logs like failed to execute ban jail 'dovecot' action iptables-allports, so I am trying this:
In action.d I created banning.conf:
[Definition]
actionban = /usr/bin/firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='<IP>' reject"; ; /usr/bin/firewall-cmd --reload
and in jail.local:
[DEFAULT]
banaction = iptables-multiport
banning
But I get Error in action definition banning.
I know this is not a solution.
Before moving the server, I used fail2ban/iptables (not firewalld) for years without having to pay attention beyond the default settings.
How can I get all those IPs banned after they reach the maxretry value?
Your issue probably has nothing to do with maxretry etc.
If you see [jail] Ban 192.0.2.1 and several [jail] 192.0.2.1 already banned messages after it (especially some minutes after the "Ban" message for the same jail/IP), this means only that your banning action (firewalld) does not work at all: after the ban, the intruder IP is still able to repeat its attempts.
Recently we have had certain issues with that (especially with the combination of firewalld + CentOS); see for example https://github.com/fail2ban/fail2ban/issues/1609 as well as the related firewalld issue, https://github.com/firewalld/firewalld/issues/515.
So check your native net-filter (iptables, etc.): if you see some rules (white-listing established traffic) before the fail2ban chains, your configuration is not compatible with fail2ban (or whatever banning system you use)... This may be the answer for you: https://github.com/fail2ban/fail2ban/issues/2503#issuecomment-533105500.
Here is another similar issue with an example excerpt illustrating a "wrong iptables rule that bypasses fail2ban": https://github.com/fail2ban/fail2ban/issues/2545#issuecomment-543347684
In this case:
either switch the backend of firewalld (as suggested above);
or switch the banaction of fail2ban to something native (iptables/ipset/etc.);
or even add one more action that drops or kills active established connections of the banned IP (using something like tcpkill, killcx, ss, etc.); see the sketch below.
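For that last option, a possible sketch of such an extra action using ss; the file name killconn.conf and the whole definition are illustrative, not a stock fail2ban action, and ss -K needs a kernel built with socket-destroy support:
# action.d/killconn.conf (hypothetical) - kill live connections of the banned IP
[Definition]
actionstart =
actionstop =
actioncheck =
actionban = ss -K dst <ip>
actionunban =
It would then be listed as an additional action of the jail (one action per line under action = ...), next to the normal banaction.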
UPDATE 1
jail.local example:
[DEFAULT]
banaction = iptables-multiport
banaction_allports = iptables-allports
[postfix-sasl]
enabled = true
[dovecot]
enabled = true
...
If, after a fail2ban reload, you still see some IP making attempts after Ban and already banned messages in fail2ban.log, provide a log excerpt of fail2ban around the first ban, or any possible errors around it (because already banned is too late and does not help at all).
If no errors are there, provide the output of iptables -nL.
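When checking that output, the point from above is the rule order: the fail2ban REJECT rules have to be hit before any broad ACCEPT such as one for ESTABLISHED,RELATED traffic. Something along these lines shows it:
# Rule order matters: fail2ban chains must come before generic ACCEPT rules
iptables -nL INPUT --line-numbers | head -n 20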

tcpdump expression syntax error in `ip proto tcp`

I scrutinized the man pages of tcpdump and pcap-filter (which defines the grammar of tcpdump's expressions), but I could not find why my expression is a syntax error:
$ sudo tcpdump -i lo 'ip proto tcp'
tcpdump: syntax error
The man page clearly spells out that ip proto protocol is valid grammar: https://www.tcpdump.org/manpages/pcap-filter.7.html
Could the issue be a version mismatch?
It looks like tcpdump doesn't accept the tcp identifier in that context: in ip proto, the names tcp, udp and icmp are also keywords of the filter language. One option is to use the actual IP protocol number:
sudo tcpdump -i lo ip proto 0x6
If you want IPv6 as well (which is what tcpdump -i lo tcp translates to):
sudo tcpdump -i lo ip proto 0x6 or ip6 proto 0x6
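Alternatively, the pcap-filter grammar accepts the symbolic name if the keyword is escaped with a backslash (the single quotes keep the shell from consuming it):
sudo tcpdump -i lo 'ip proto \tcp'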

netcat: listen and capture TCP packets

Is it possible to just listen (not create a new connection) for TCP packets on a port which is already in use, i.e. one that is already carrying data from a router to a server?
I am aware that the following starts listening on the given port and saves the data to a pcap file:
SERVER SIDE: nc -l -p <port> > file_name.pcap
CLIENT SIDE: sudo tcpdump -s 0 -U -n -i eth0 not host <server_ip> -w file_name.pcap | nc <server_ip> <port>
But this creates a new connection on the given port and captures packets related to it. I want to capture packets on a port which is already being used to send packets.
Netcat doesn't seem to have that capability currently (according to the man page).
When listening netcat typically opens a socket of family AF_INET (network layer, i.e., TCP/UDP) and type SOCK_STREAM (two-way connection). In contrast, to dump packets, tcpdump opens a socket of family AF_PACKET (device driver layer) and type SOCK_RAW (direct access to packets received).
You can observe this with strace to trace syscalls (here, socket and the subsequent bind) and their arguments:
$ sudo strace -e trace=socket,bind nc -l 8888
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
bind(3, {sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
$
$ sudo strace -e trace=socket,bind tcpdump -w tmp.pcap
[...]
socket(PF_PACKET, SOCK_RAW, 768) = 3
bind(3, {sa_family=AF_PACKET, proto=0x03, if2, pkttype=PACKET_HOST, addr(0)={0, }, 20) = 0
[...]
You can dump traffic at the device driver level (like tcpdump) or the network layer by using a socket of type SOCK_RAW. I.e., you could very well retrieve the file sent over netcat by opening a socket of family AF_INET and type SOCK_RAW, as is implemented in this blog post.

TCL script cannot configure multicast socket

I'm working with a Tcl script under Ubuntu 12.04, and I'm facing a problem when I try to configure a multicast socket. What I'm trying to do is forward traffic from one socket to a multicast one, but although the multicast socket is apparently created correctly, it isn't bound to the multicast group I want.
This is the script I'm using:
#!/bin/sh
# test.tcl \
exec tclsh "$0" ${1+"$@"}

package require udp

set multicastPort "50003"

proc connector {unicastIP multicastIP port {protocol tcp}} {
    if { [string equal $protocol "tcp"] } {
        socket -server serverTCP -myaddr $unicastIP $port
        puts "tcp"
    } elseif { [string equal $protocol "udp"] } {
        serverUDP $unicastIP $multicastIP $port
        puts "udp"
    }
}

proc serverUDP {unicastIP multicastIP port} {
    global multicastPort
    set socketUDP [udp_open $port]
    puts " $unicastIP"
    fconfigure $socketUDP -blocking false -translation binary -buffering none -remote [list $unicastIP $port]
    #fileevent $socketUDP readable [list gettingData $socketUDP]
    set multicastSocket [udp_open $multicastPort]
    udp_conf $multicastSocket -ttl 4
    fconfigure $multicastSocket -blocking false -translation binary -buffering none -mcastadd $multicastIP -remote [list $multicastIP $port]
    fileevent $socketUDP readable [list forwarding $socketUDP $multicastSocket]
    #puts $socketUDP "hello!"
    #flush $socketUDP
}

proc forwarding {socketSrc socketDst} {
    set data [read -nonewline $socketSrc]
    puts "Read data-> $data"
    puts -nonewline $socketDst $data
    puts "Written data-> [read -nonewline $socketDst]"
}

connector 127.0.0.1 224.0.1.1 50000 udp

vwait forever
However, if I run the script and check the ports on my system, the multicast port is not assigned the proper multicast IP, as you can see:
~$ netstat -ptnlu
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:50000 0.0.0.0:* 3334/tclsh
udp 0 0 0.0.0.0:50003 0.0.0.0:* 3334/tclsh
Could anyone tell me the reason?
Thanks in advance, regards!
AFAIK, that is OK. I have a multicast daemon in production using Tcl and its udp package, and netstat and ss tools also show me the socket as listening on the wildcard address.
"The trick" here, I suppose, is that multicasting is one level up the stack: joining a multicast group is not merely opening a socket or an endpoint on the group address but rather sending a very real IGMP "join" message to the local transport segment (Ethernet, in most deployments) and further communicating with the nearby IGMP routers (again, on Ethernet, they're mostly switches).
So, in your case, just fire up tcpdump and see what it dumps when you start your program. A useful call to tcpdump looks something like this:
tcpdump -i eth0 -n 'igmp and host 224.0.1.1'
To observe the UDP traffic exchange, use:
tcpdump -i eth0 -n 'udp and host 224.0.1.1 and port 50000'
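Besides tcpdump, the kernel's own view of the group memberships can be checked; if the join worked, 224.0.1.1 should be listed for the interface (eth0 here is just an example name):
# Multicast group memberships per interface
ip maddr show dev eth0
# or, equivalently
netstat -gn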