tcpdump expression syntax error in `ip proto tcp` - pcap

I scrutinized the man pages of tcpdump and pcap-filter (which defines the grammar of tcpdump's filter expressions), but I could not figure out why my expression produces an error:
$ sudo tcpdump -i lo 'ip proto tcp'
tcpdump: syntax error
The man page clearly states that ip proto protocol is valid grammar: https://www.tcpdump.org/manpages/pcap-filter.7.html
Could the issue be a version mismatch?

tcp is also a standalone keyword in the filter grammar, so tcpdump doesn't accept it unescaped after ip proto. Either escape it with a backslash ('ip proto \tcp', quoted so the shell keeps the backslash) or use the actual IP protocol number:
sudo tcpdump -i lo ip proto 0x6
If you want IPv6 as well (which is what tcpdump -i lo tcp translates to):
sudo tcpdump -i lo ip proto 0x6 or ip6 proto 0x6
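If you want to see what a filter actually compiles to, tcpdump's -d option prints the generated packet-matching code instead of capturing anything, which makes a handy sanity check (using the same lo interface as above):
sudo tcpdump -i lo -d tcp
sudo tcpdump -i lo -d 'ip proto 0x6 or ip6 proto 0x6'
Comparing the two outputs shows that the bare tcp primitive covers both the IPv4 and the IPv6 case.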

Related

How to read a pcap file, filter by IP address and port, then write the data to another file

As part of a lab exercise that I am doing, I have been asked: using tcpdump, read the packets from tcpdumpep1.pcap, filter packets from IP address 184.107.41.72 and port 80, and write these packets to a new file.
I tried the following, but I'm getting a syntax error:
$ tcpdump -r tcpdumpep1.pcap -w output.txt host 184.107.41.72 port 80
reading from file tcpdumpep1.pcap, link-type EN10MB (Ethernet)
tcpdump: syntax error in filter expression: syntax error
tcpdump takes a filter predicate, i.e., it expects a logical expression that evaluates to a boolean when applied to each packet.
Here, it returns a syntax error because you're missing a logical and between the two primitives:
tcpdump -r tcpdumpep1.pcap -w output.txt host 184.107.41.72 and port 80
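Note that -w writes raw pcap data regardless of the file extension, so output.txt will be a binary capture file; to verify the result, read it back with tcpdump rather than a text editor:
tcpdump -nr output.txt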

netcat: listen and capture TCP packets

Is it possible to just listen (not create a new connection) for TCP packets on a port that is already in use, i.e. one that is already carrying data from a router to a server?
I am aware that the following starts the listening process on the mentioned port, and saves it in the pcap file:
SERVER SIDE: nc -l -p <port> > file_name.pcap
CLIENT SIDE: sudo tcpdump -s 0 -U -n -i eth0 not host <server_ip> -w file_name.pcap | nc <server_ip> <port>
But this creates a new connection on the given port and captures packets related to it. I want to capture packets on a port that is already being used to send packets.
Netcat doesn't seem to have that capability currently (according to the man page).
When listening, netcat typically opens a socket of family AF_INET (the IP layer, used for TCP/UDP) and type SOCK_STREAM (a two-way connection). In contrast, to dump packets, tcpdump opens a socket of family AF_PACKET (device driver layer) and type SOCK_RAW (direct access to packets as received).
You can observe this with strace to trace syscalls (here, socket and the subsequent bind) and their arguments:
$ sudo strace -e trace=socket,bind nc -l 8888
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
bind(3, {sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
$
$ sudo strace -e trace=socket,bind tcpdump -w tmp.pcap
[...]
socket(PF_PACKET, SOCK_RAW, 768) = 3
bind(3, {sa_family=AF_PACKET, proto=0x03, if2, pkttype=PACKET_HOST, addr(0)={0, }}, 20) = 0
[...]
You can dump traffic at the device driver level (like tcpdump does) or at the network layer by using a socket of type SOCK_RAW. That is, you could very well retrieve the file sent over netcat by opening a socket of family AF_INET and type SOCK_RAW, as is implemented in this blog post.
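For the original goal of capturing packets on a port that is already in use, running tcpdump itself with a port filter is the simplest route, since it taps the traffic at the device driver level without touching the existing connection (a sketch; eth0, port 8888, and the output file name are example values):
sudo tcpdump -s 0 -n -i eth0 -w existing_port.pcap 'tcp port 8888'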

tcpdump filter for an ARP H/W address

What tcpdump filter should I use to get ARP packets with a specific H/W address (src or dst) from a pcap dump file?
Try this:
"arp and ether host xx:xx:xx:xx:xx:xx"
Refer to the PCAP-FILTER man page for more information.
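Applied to a saved capture file, that looks like the following (dump.pcap and the MAC address are placeholders):
tcpdump -r dump.pcap 'arp and ether host 00:11:22:33:44:55'
ether host matches the address in either the source or destination field, which covers the "src or dst" requirement; use ether src or ether dst instead to restrict the match to one direction.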

Infinispan jgroups firewall ports

When using JGroups with a component such as Infinispan, which ports do I need to open in the firewall?
My JGroups configuration file is:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
<UDP
mcast_addr="${test.jgroups.udp.mcast_addr:239.255.0.1}"
mcast_port="${test.jgroups.udp.mcast_port:46655}"
bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"
bind_port="${test.jgroups.bind.port:46656}"
port_range="${test.jgroups.bind.port.range:20}"
tos="8"
ucast_recv_buf_size="25M"
ucast_send_buf_size="1M"
mcast_recv_buf_size="25M"
mcast_send_buf_size="1M"
loopback="true"
max_bundle_size="64k"
ip_ttl="${test.jgroups.udp.ip_ttl:2}"
enable_diagnostics="false"
bundler_type="old"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.min_threads="3"
thread_pool.max_threads="10"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="10000"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="4"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="1"
internal_thread_pool.max_threads="4"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="10000"
internal_thread_pool.rejection_policy="Discard"
/>
<PING timeout="3000" num_initial_members="3"/>
<MERGE2 max_interval="30000" min_interval="10000"/>
<FD_SOCK bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}" start_port="${test.jgroups.fd.sock.start.port:9780}" port_range="${test.jgroups.fd.sock.port.range:10}" />
<FD_ALL timeout="15000" interval="3000" />
<VERIFY_SUSPECT timeout="1500" bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"/>
<pbcast.NAKACK2
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"/>
<UNICAST3
xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<UFC max_credits="2m" min_threshold="0.40"/>
<MFC max_credits="2m" min_threshold="0.40"/>
<FRAG2 frag_size="30k" />
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
</config>
Now, I have the following ports open in the firewall (Chain INPUT (policy ACCEPT)):
ACCEPT udp -- anywhere anywhere multiport dports 46655:46676 /* 205 ipv4 */ state NEW
ACCEPT tcp -- anywhere anywhere multiport dports 9780:9790 /* 204 ipv4 */ state NEW
But still, after running the Infinispan embedded cache for a few minutes, I'm getting:
2014-11-05 18:21:38.025 DEBUG org.jgroups.protocols.FD_ALL - haven't received a heartbeat from <hostname>-47289 for 15960 ms, adding it to suspect list
It works fine if I turn off iptables.
Thanks in advance
How did you set up your iptables rules? I used (on Fedora 18):
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -p udp --dport 46655:46676 -j ACCEPT
iptables -A OUTPUT -p udp --dport 46655:46676 -j ACCEPT
iptables -A INPUT -p tcp --dport 9780:9790 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 9780:9790 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP
This works for me, between 2 hosts.
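If the rules look correct but traffic is still being lost, the packet counters can tell you whether anything is actually hitting the final DROP rule (purely a diagnostic step):
iptables -L INPUT -v -n
Run it while the cluster is forming and watch whether the pkts/bytes counters on the DROP rule keep increasing.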
In my environment, I have two blades on the same chassis with IP bonding (mode 1) configured. When the eth0 interface is active on both blades, everything works perfectly. When eth0 is active on one blade and eth1 is active on the other, the cluster gets created, but after a while it starts throwing the "haven't received a heartbeat" exception, and no process leaves the cluster even after a long period of time (hours).
There's a bug in JGroups that may be associated with your problem: "Don't receive heartbeat in Nic Teaming configuration after NIC switch".
Workaround: switching to TCP.
See also:
JGroups Advanced Concepts - TCP
What's mode 1 again? Failover, I assume? Not load balancing, right?
Perhaps you need to use iptables -i INTF, where INTF is either eth0 or eth1? I'm not an expert in iptables, but perhaps you need to use the logical name instead, e.g. iptables -i bond0?
I suggest using Wireshark to see which packets are received, and/or enabling logging of the DROPs in iptables on both boxes.
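With a rule set like the one above, that logging can be enabled by making sure a LOG rule sits just before the final unconditional DROP, so every dropped packet is written to the kernel log (the log prefix is arbitrary):
iptables -A INPUT -j LOG --log-prefix "jgroups-drop: "
iptables -A INPUT -j DROP
The entries then show up via dmesg or in the system log, depending on the distribution.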
This was happening because the switch was dropping UDP packets, due to limits defined on the switch for UDP traffic...

TCL script cannot configure multicast socket

I'm working with a Tcl script under Ubuntu 12.04, and I'm facing a problem when I try to configure a multicast socket. What I'm trying to do is forward traffic from some socket to a multicast one, but I don't know why, although the multicast socket apparently gets created fine, it isn't bound to the multicast group I want.
This is the script I'm using:
#!/bin/sh
# test.tcl \
exec tclsh "$0" ${1+"$@"}
package require udp
set multicastPort "50003"
proc connector {unicastIP multicastIP port {protocol tcp}} {
    if { [string equal $protocol "tcp"] } {
        socket -server serverTCP -myaddr $unicastIP $port
        puts "tcp"
    } elseif {[string equal $protocol "udp" ] } {
        serverUDP $unicastIP $multicastIP $port
        puts "udp"
    }
}
proc serverUDP {unicastIP multicastIP port} {
    global multicastPort
    set socketUDP [udp_open $port]
    puts " $unicastIP"
    fconfigure $socketUDP -blocking false -translation binary -buffering none -remote [list $unicastIP $port]
    #fileevent $socketUDP readable [list gettingData $socketUDP]
    set multicastSocket [udp_open $multicastPort]
    udp_conf $multicastSocket -ttl 4
    fconfigure $multicastSocket -blocking false -translation binary -buffering none -mcastadd $multicastIP -remote [list $multicastIP $port]
    fileevent $socketUDP readable [list forwarding $socketUDP $multicastSocket ]
    #puts $socketUDP "hello!"
    #flush $socketUDP
}
proc forwarding {socketSrc socketDst} {
    set data [read -nonewline $socketSrc]
    puts "Read data-> $data"
    puts -nonewline $socketDst $data
    puts "Written data-> [read -nonewline $socketDst]"
}
connector 127.0.0.1 224.0.1.1 50000 udp
vwait forever
However, if I run the script and check the ports on my system, the multicast port is not bound to the multicast IP, as you can see:
~$ netstat -ptnlu
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:50000 0.0.0.0:* 3334/tclsh
udp 0 0 0.0.0.0:50003 0.0.0.0:* 3334/tclsh
Could anyone tell me the reason?
Thanks in advance,
Regards!
AFAIK, that is OK. I have a multicast daemon in production using Tcl and its udp package, and netstat and ss tools also show me the socket as listening on the wildcard address.
"The trick" here, I suppose, is that multicasting is one level up the stack: joining a multicast group is not merely opening a socket or an endpoint on the group address but rather sending a very real IGMP "join" message to the local transport segment (Ethernet, in most deployments) and further communicating with the nearby IGMP routers (again, on Ethernet, they're mostly switches).
So, in your case, just fire up tcpdump and see what it dumps when you start your program. A useful call to tcpdump looks something like this:
tcpdump -i eth0 -n 'igmp and host 224.0.1.1'
To observe the UDP traffic exchanges, use:
tcpdump -i eth0 -n 'udp and host 224.0.1.1 and port 50000'
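Independently of the capture, the kernel's view of multicast group membership is visible with netstat -g or ip maddr (eth0 here just matches the interface used above); that is where the join shows up, not in the listening-socket list printed by netstat -ptnlu:
netstat -g
ip maddr show dev eth0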