I'm trying to find out the proper MTU (Maximum Transmission Unit) size for a mail server using this guide. However, the server blocks pings. Are there any other ways I can determine the MTU size without admin access to the mail server?
You are trying to do path MTU discovery.
As long as no firewalls along the way block all ICMP traffic (the returning "Fragmentation needed" ICMP packets have to reach you), the method provided will work with IP packets of any kind. You just have to send IP packets with the DF (Don't Fragment) flag set and watch for returning "Fragmentation needed" ICMP packets.
Here is an example in Python using raw sockets.
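This is only a minimal sketch of the idea, not a complete implementation: it builds an ICMP echo request on a raw socket (Linux only, and it needs root), lets the kernel set the DF bit through the IP_MTU_DISCOVER socket option, and then reads back the kernel's cached path MTU. The target host, the probe size, and the numeric fallback constants are assumptions.

#!/usr/bin/env python3
# Minimal path MTU discovery sketch (Linux only, needs root for the raw socket).
# The kernel sets the DF bit for us via IP_MTU_DISCOVER/IP_PMTUDISC_DO and caches
# the path MTU whenever a "Fragmentation needed" ICMP error comes back.
import socket
import struct
import time

TARGET = "mail.example.com"   # placeholder host
PROBE_SIZE = 1472             # ICMP payload that fills a 1500-byte Ethernet frame

# Socket option constants from <linux/in.h>; not every Python build exposes them.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

def icmp_echo(seq, size):
    # Build an ICMP echo request (type 8) with an RFC 1071 checksum.
    header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, seq)
    payload = b"\x00" * size
    data = header + payload
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) + data[i + 1]
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return struct.pack("!BBHHH", 8, 0, ~s & 0xFFFF, 0x1234, seq) + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
sock.connect((TARGET, 0))
sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set DF

for seq in range(1, 6):
    try:
        sock.send(icmp_echo(seq, PROBE_SIZE))
    except OSError as exc:
        # EMSGSIZE here means the kernel already knows the path MTU is smaller.
        print("send failed:", exc)
    time.sleep(1)
    print("kernel's current path MTU estimate:", sock.getsockopt(socket.IPPROTO_IP, IP_MTU))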
Before I proceed, I'd like to mention that I did try to research this topic on the internet, but I still need clarification.
Let's say I have two Linux machines connected to a switch (and only to a switch). Machine A has an IP address of 10.0.0.1 and machine B has 10.0.0.2. I used the nmcli command to set the IP address and bring up an Ethernet connection on each machine. Everything works as expected.
Now, the confusing part is how machine A can find machine B and vice versa? I'm using the following command to connect from machine A to machine B:
ssh userB@10.0.0.2
And it works, even if this is the very first data transmission. This surely means that machine A somehow already knew machine B's MAC address; otherwise, the frame wouldn't find its way to machine B. But how? Since the IP address is meaningless to the switch (layer 2), why does ping 10.0.0.2 or ssh 10.0.0.2 still work?
Probably the ARP cache was already populated. Maybe there was a gratuitous ARP broadcast:
Every time an IP interface or link goes up, the driver for that interface will typically send a gratuitous ARP to preload the ARP tables of all other local hosts.
If not, an ARP request/reply exchange most likely happened right before the first ping. Check the arp command or ip neigh.
In general I suggest you use Wireshark to explore what's going on, or something like tcpdump -n -i eth0 not ssh if you are working remotely (note the -n to prevent name resolution). You can also record traffic with tcpdump -s 9999 -w output.pcap and view it later in Wireshark.
If you sniff network traffic on a third PC, keep in mind that switches will not send traffic to all ports when they have learned where the destination is. Some switches allow you to configure a mirror port to observe all traffic to or from a certain port. Either way you should always be able to observe ARP requests as they are broadcast.
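If you would rather watch the ARP exchange programmatically than in Wireshark, here is a rough sketch (Linux only, needs root) that opens a packet socket for the ARP EtherType and prints who-has/is-at messages; the interface name printed is whatever link each frame arrived on:

#!/usr/bin/env python3
# Rough ARP sniffer sketch: Linux only, needs root.
import socket
import struct

ETH_P_ARP = 0x0806

# A packet socket opened for the ARP EtherType receives every ARP frame this host sees.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ARP))

def mac(b):
    return ":".join(f"{x:02x}" for x in b)

while True:
    frame, (ifname, *_rest) = sock.recvfrom(65535)
    # Ethernet header is 14 bytes (dst MAC, src MAC, EtherType); the ARP payload follows.
    arp = frame[14:42]
    htype, ptype, hlen, plen, oper, sha, spa, tha, tpa = struct.unpack("!HHBBH6s4s6s4s", arp)
    if oper == 1:   # ARP request (broadcast)
        print(f"{ifname}: who has {socket.inet_ntoa(tpa)}? tell {socket.inet_ntoa(spa)} ({mac(sha)})")
    elif oper == 2: # ARP reply
        print(f"{ifname}: {socket.inet_ntoa(spa)} is at {mac(sha)}")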
Basically, before the first packet can even be sent, the sending machine (not the switch itself) broadcasts an ARP request to learn which MAC address belongs to the destination IP address. The switch (virtual or physical) floods that broadcast out of all its ports and, in the process, learns which MAC address sits behind which port. So even though IP addresses are meaningless to a plain layer-2 switch (they are a layer-3 concept), the ARP exchange is what maps the IP addresses we humans work with onto the MAC addresses the switch actually forwards on.
When you ping a device such as 10.0.0.2, your machine looks up the corresponding MAC address in its ARP cache (sending an ARP request first if the entry is missing), and the switch consults its MAC address table to pick the port that reaches the destination.
The best way to understand the whole process is to capture the traffic with Wireshark, or to build a simple topology in software such as Cisco Packet Tracer.
Once a service is discovered through DNS-SD, how exactly does the address of that host get resolved, and does it take significantly more time/overhead?
Also, if I am using JmDNS or Bonjour there are call-backs for both serviceFound and serviceResolved. If I am just interested in the IP address of the device publishing a certain service, is there a faster/more efficient way of getting the address than going through both serviceFound and serviceResolved?
Thanks
DNS-SD uses Multicast DNS (mDNS), which works by sending DNS packets over UDP to a well-known multicast address. All mDNS-capable hosts in the network also listen on this address. Because it uses UDP, the overhead is quite low. The clients are also designed to keep the amount of chatter on the network to a minimum by using extensive caching.
Service discovery is a two-step process. The first step is finding the names of all hosts providing a certain service (e.g. printing). This does not yet give you the IP address; instead it gives you the mDNS name (ending in .local). This is because the IP address could change, whereas the name will not.
The second step is to resolve the .local name of the host over mDNS. You ask via multicast who foo.local is; foo.local sees that packet and responds via multicast with its IP address, port number, and other information.
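To make the two steps concrete, here is a rough sketch using the third-party python-zeroconf package rather than JmDNS or Bonjour; the service type is just an example and the exact API differs slightly between zeroconf versions:

#!/usr/bin/env python3
# Rough DNS-SD sketch with the third-party "zeroconf" package (pip install zeroconf).
# "_http._tcp.local." is only an example service type.
import time
from zeroconf import Zeroconf, ServiceBrowser

class Listener:
    def add_service(self, zc, type_, name):
        # Step 1 (browsing) only gave us the instance name; step 2 resolves it to
        # an address and port by querying for the SRV/A records over multicast.
        info = zc.get_service_info(type_, name)
        if info:
            addresses = info.parsed_addresses()  # may be info.addresses in older versions
            print(f"{name} -> {addresses} port {info.port}")

    def remove_service(self, zc, type_, name):
        print(f"{name} went away")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_http._tcp.local.", Listener())
try:
    time.sleep(10)  # let discovery and resolution run for a bit
finally:
    zc.close()

In practice responders often include the address records in the same multicast response as the service records, so the resolve step usually adds little extra traffic or delay.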
I am writing a small application that needs to connect through one of multiple network interfaces on the machine. The interface is not the "default" one (the one with the default route). Is it possible to bind an outbound TCP socket directly to a specific interface?
Here is an example:
eth0: 192.168.1.10, gateway 192.168.1.1
eth1: 192.168.2.10, gateway 192.168.2.1
default gateway: 192.168.1.1
(both interfaces can reach the Internet through different external IPs)
Now, I want my application to use eth1 to connect to an external server, even if the system is configured to use eth0 for external traffic.
(The question is probably trivial, but I just wanted to know if it is possible at all before spending time on it)
Currently, I am using Python with Twisted, but if I have to use BSD sockets then so be it.
From: http://linux.about.com/od/commands/l/blcmdl7_socket.htm
SO_DONTROUTE - Don't send via a gateway, only send to directly connected hosts. The same effect can be achieved by setting the MSG_DONTROUTE flag on a socket send(2) operation. Expects an integer boolean flag.
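Here is a rough sketch of what that looks like at the socket level, using the addresses from the example above as placeholders. Note that SO_DONTROUTE only helps for destinations that are directly on-link; binding the socket to eth1's address before connect() selects the source IP, although the kernel's routing table still decides which link the packets actually leave on.

#!/usr/bin/env python3
# Rough sketch only; 192.168.2.10 and the destination are placeholders from the example.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_DONTROUTE as quoted above: never go through a gateway, so this only works
# for destinations that are directly reachable on an attached subnet.
s.setsockopt(socket.SOL_SOCKET, socket.SO_DONTROUTE, 1)

# Binding to eth1's address picks the source IP of outgoing packets; the routing
# table still chooses the outgoing link.
s.bind(("192.168.2.10", 0))

s.connect(("192.168.2.1", 80))  # placeholder destination on eth1's subnet
s.close()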
I am using an SCTP client to send 1000-byte data to another SCTP server over a 100 ms delay link. The delay is configured using traffic control (tc) and netem, available in Linux:
tc qdisc add dev eth0 root netem delay 100ms
The code I have used is from SCTP Multihoming. I have set the round-trip time (max) to 60 seconds and the heartbeat to 10 seconds. Now the issue I am facing is that I can send around 3 to 4 packets of 1000 bytes properly. After that, "Connection reset by peer" happens and I am not able to send any more packets. Can you please let me know what I need to do to send SCTP data over a high-latency link? Thanks for your help.
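For reference, this is not the SCTP Multihoming code above; it is only a minimal sketch of a one-to-one style SCTP client using Python's plain socket module (Linux with the SCTP kernel module loaded; the server address, port, and message count are placeholders), sending 1000-byte messages as described:

#!/usr/bin/env python3
# Minimal one-to-one style SCTP client sketch (Linux, needs the sctp kernel module).
# SERVER and PORT are placeholders; tuning RTO and heartbeat intervals requires
# SCTP-specific socket options that are not shown here.
import socket

SERVER, PORT = "192.0.2.10", 5000   # placeholder server address and port
MESSAGE = b"x" * 1000               # 1000-byte payload as in the question

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect((SERVER, PORT))
for i in range(10):
    sock.send(MESSAGE)
    print("sent message", i + 1)
sock.close()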
Finally I was able to fix the issue. It was caused by a NAT box between the SCTP client and server. The NAT changes the IP address, and during the SCTP heartbeat message exchange the addresses no longer match, so the client can't find the right IP address and the SCTP association fails. The SCTP server then sends an ABORT to the client. I removed the NAT and everything worked fine.
Is it possible to send an echo request to a host behind NAT?
After all, an echo request doesn't carry a port for the destination host, so if there are several hosts using the same external IP address, how will the NAT be able to forward the echo request to a specific host?
Most modern NAT/packet filtering implementations are stateful. That means they have a wider concept of the word connection than the older stateless variants. That allows them to handle more complex protocols that use additional connections (e.g. FTP), as well as connection-less protocols like ICMP.
In the case of ICMP packets, echo requests contain an ID field that is preserved in the reply. While its 16 bits are somewhat restrictive, in conjunction with the source IP address from the IP header it allows a reasonably high confidence about which echo request each reply corresponds to.
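As a small illustration of where that ID lives, here is a sketch of parsing the ICMP echo header and matching a reply back to its request by (source address, ID); the packet bytes and addresses below are made up:

#!/usr/bin/env python3
# Sketch: the 16-bit identifier sits in the ICMP echo header, so a NAT (or your
# own code) can match a reply to the request that caused it. Bytes are made up.
import struct

def parse_icmp_echo(icmp_bytes):
    # Returns (type, code, ident, seq) from the first 8 bytes of an ICMP message.
    icmp_type, code, _checksum, ident, seq = struct.unpack("!BBHHH", icmp_bytes[:8])
    return icmp_type, code, ident, seq

# Pretend we sent an echo request with ID 0x1234, sequence 7 to 203.0.113.5 ...
outstanding = {("203.0.113.5", 0x1234): 7}

# ... and this echo reply (type 0) came back from that address.
reply = struct.pack("!BBHHH", 0, 0, 0, 0x1234, 7)
icmp_type, code, ident, seq = parse_icmp_echo(reply)

if icmp_type == 0 and ("203.0.113.5", ident) in outstanding:
    print(f"reply matches request id={ident:#x} seq={seq}")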
EDIT:
As for targeting specific hosts behind a NAT implementation, that is not generally possible. You might be able to:
Redirect all ICMP traffic to one internal host to monitor that one host only.
Use the "pad" data bytes of the echo request packet to provide some kind of host identifier. For example, the -p option of ping on some Linux systems allows setting that field. This is by no means standard, though.
In general, NAT is supposed to hide the hosts behind it from the world, with the exception of any forwarded IP connections.