How to use Avahi as if it were Bonjour

I am using Avahi and Bonjour as mDNS responders.
With Bonjour, the answers to my query arrive in the answer section, and additional information such as the SRV, A, and AAAA records arrives in the additional section of the DNS packet.
With Avahi, however, the PTR, SRV, A, and AAAA records all arrive in the answer section.
How do I configure Avahi to generate responses the same way Bonjour does?
I would also like the same query in Avahi to complete with a latency of 2 ms or less.

Related

ASIC verification of a multiport switch

I have a DUT that can take packets from all 4 identical interfaces (A, B, C, D). Packets from one port can go to any one of the output ports (1, 2, 3, 4). Example: packets from A can go to 1, 2, 3 or 4; packets from port B can go to 1, 2, 3 or 4; and so on. Packets arriving on the same port stay in order, but packets can be serviced in any order between A, B, C and D (no order is maintained between ports, since all 4 interfaces can be active at the same time sending packets).
How do I verify such a DUT? What scoreboard data structure should I use? I need to treat the DUT as a black box, because I do not know how the DUT decides which port to send the packets on. I have a UVM agent on each of the 4 interfaces A, B, C and D. A virtual sequence controls the sequences on all 4 agents.
Any inputs? Thanks in advance.
Your question is very broad and opinion-based. You can only verify against the requirements that are given to you. A packet that comes in has to come out intact. If there are no requirements about which ports they come out of, then it should not matter to your testbench. There must be some other requirements dealing with throughput that you have not mentioned.
In the simplest situation, make all your packets unique with a global packet ID so that you can send them to a common scoreboard and, at the end of the test, match up all the received packets with the sent packets. An associative array keyed by the packet ID works well for this.
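A minimal sketch of that scoreboard idea, written in C++ for illustration (a real UVM testbench would use a SystemVerilog associative array keyed the same way; the packet fields and ID scheme here are assumptions):

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Hypothetical packet: a globally unique ID stamped at generation time,
    // plus the payload to compare.
    struct Packet {
        uint64_t id;
        std::string payload;
    };

    class Scoreboard {
        std::unordered_map<uint64_t, Packet> expected_;  // keyed by packet ID
    public:
        // Called by the input-side monitors (ports A..D).
        void add_expected(const Packet& p) { expected_[p.id] = p; }

        // Called by the output-side monitors (ports 1..4); arrival order
        // across ports does not matter.
        void check_received(const Packet& p) {
            auto it = expected_.find(p.id);
            if (it == expected_.end()) {
                std::cerr << "unexpected packet id " << p.id << "\n";
                return;
            }
            if (it->second.payload != p.payload)
                std::cerr << "payload corrupted for id " << p.id << "\n";
            expected_.erase(it);  // matched: remove from the pending set
        }

        // End-of-test check: anything still pending was dropped by the DUT.
        bool all_matched() const { return expected_.empty(); }
    };

If per-input-port ordering also has to be checked, keep one FIFO per source port alongside the map and verify that IDs from the same source pop in order.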

How to reserve an RTP port when making a SIP INVITE request

I am developing a VoIP softphone, and I need to put an RTP port number in the SDP part of my INVITE request. How can I find a free UDP port number on which to accept RTP packets?
I have found 2 solutions but don't know whether they are the correct way to do this.
Solution 1: start from a UDP port number (say 7000) and see if it's free; if not, increase by 1 and continue until a free port is found. Then open a UDP socket on that port, so that other calls can't take this call's RTP port, and send the request.
Solution 2: start from a UDP port number (say 7000) and see if it's free, put it in the SDP and send the request. But when I get the OK response from the other party (after a while), there is no guarantee that the port number I announced for RTP is still available; another call may have taken it in the meantime.
I would like to know what the best way to do this is.
As AymericM suggested, you should stick to your solution 1.
You need to use the bind call to bind a socket to a port.
Additionally, the RTP specification states that the RTP port should typically be even, with the RTCP port being rtp_port + 1.
For UDP and similar protocols,
    RTP SHOULD use an even destination port number and the corresponding
    RTCP stream SHOULD use the next higher (odd) destination port number.
Even in the case where you support RTP/RTCP multiplexing over a single port, the answerer might not, so it might be a good idea to bind both the RTP and RTCP ports when generating the offer.
So to summarise, try to bind two consecutive ports starting on an even number and once you've found two suitable ports, generate the offer/INVITE.
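A rough sketch of that search with BSD-style sockets (error handling trimmed; the 7000 starting port is just the example from the question):

    #include <arpa/inet.h>
    #include <cstdint>
    #include <sys/socket.h>
    #include <unistd.h>

    // Try to bind an existing UDP socket to the given port.
    static bool bind_udp(int sock, uint16_t port) {
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        return bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
    }

    // Find an even RTP port with RTCP on the next odd port, starting at base.
    // On success both sockets are left bound (reserving the pair) and the
    // RTP port is returned; 0 means no pair was found.
    uint16_t reserve_rtp_pair(int& rtp_sock, int& rtcp_sock, uint16_t base = 7000) {
        for (uint16_t p = base & ~1; p < 65000; p += 2) {  // keep p even
            rtp_sock  = socket(AF_INET, SOCK_DGRAM, 0);
            rtcp_sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (bind_udp(rtp_sock, p) && bind_udp(rtcp_sock, p + 1))
                return p;                                   // pair reserved
            close(rtp_sock);                                // retry two higher
            close(rtcp_sock);
        }
        return 0;
    }

Keep both sockets open until the call ends; closing them releases the reservation.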
Solution 1 is the only way to reserve a port number within a specific range of ports.
If you do not care about being close to a specific port number, just bind a socket to port 0 in order to get a random port, which will of course be free. Then retrieve the actual port that was opened via the socket API and use it in your SDP!
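A sketch of that trick, again with BSD-style sockets; the kernel picks the port, and getsockname() reveals which one it chose:

    #include <arpa/inet.h>
    #include <cstdint>
    #include <sys/socket.h>

    // Bind to port 0 so the OS picks any free UDP port, then read it back.
    uint16_t bind_any_free_port(int sock) {
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = 0;                      // 0 means "pick one for me"
        if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0)
            return 0;
        socklen_t len = sizeof(addr);
        getsockname(sock, reinterpret_cast<sockaddr*>(&addr), &len);
        return ntohs(addr.sin_port);            // the port to advertise in the SDP
    }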

Specify which ethernet port to use for UDP writes when all have same IP addresses

I am working on an application that has a single server with 4 NIC ports, all configured with the same static IP address of 192.168.0.1, talking to 4 separate black boxes that each have the same static IP address of 192.168.0.2. The only difference in the communication to and from the black boxes is the port numbers: box 1 listens to my data on port 2010, box 2 on 2020, box 3 on 2030, and so on, with a similar pattern for the boxes transmitting back to the server on ports 2110, 2120, 2130 and so on. The wiring between the server and the black boxes is one-to-one, without any switches or hubs in between; Ethernet port eth1 goes straight to box 1, eth2 goes to box 2, and so on.
In my application design I will have different threads with separate socket instances for each port. The one thing I am unsure about is how to specify which Ethernet interface the socket should use. I have read about bind() in other threads: one can specify the source IP address and port and bind those to a socket, letting the underlying layers decide on the actual Ethernet adapter to use. Since I will be using UDP datagrams, which are simply sent out on the network regardless of whether the client is listening, I assume resolving an ip/port pair would not work here, and I also do not want to spam the network with packets destined for nothing, as there will be lots of data flowing across already. This will be written in C++11 in a Windows environment using winsock2.
How would I go about specifying which eth interface/adapter to use for a particular socket in such instance?
And for those that will ask why I am doing it this way, I have no choice as it is an outside vendor's black box hardware that I have no control in specifying different IP addresses.
You can do this, but not with sockets, or even using the networking protocol stack of your host.
But you can send and receive complete packets from a particular interface, using a mechanism such as winpcap, tun/tap, or slirp. Actually a proper network test needs to do this anyway, because you will need to test the peer's ability to handle malformed packets, which the host networking stack will never generate.
Basic observation: your task is essentially equivalent to implementing bridging in user mode. Although you aren't selecting the interface from a bridge learning table, the rest is the same, so take a look at some software that does user-mode bridging on Win32, for example coLinux.
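To make the winpcap route concrete, here is a bare-bones sketch (the adapter name is a placeholder, and you would still have to hand-build the complete Ethernet/IP/UDP frame, checksums included):

    #include <pcap.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        // Placeholder adapter name; enumerate the real ones with pcap_findalldevs().
        pcap_t* h = pcap_open_live("\\Device\\NPF_{ADAPTER-GUID}",
                                   65536, 0, 1000, errbuf);
        if (!h) { std::fprintf(stderr, "open failed: %s\n", errbuf); return 1; }

        // A complete frame: dst MAC, src MAC, EtherType, then the IPv4 and
        // UDP headers and payload, all built and checksummed by you.
        uint8_t frame[1514] = { /* hand-built Ethernet + IPv4 + UDP */ };
        int frame_len = 60;  // placeholder length

        if (pcap_sendpacket(h, frame, frame_len) != 0)
            std::fprintf(stderr, "send failed: %s\n", pcap_geterr(h));
        pcap_close(h);
        return 0;
    }

Note that bypassing the host stack means your code also owns ARP replies, checksums and any retransmission logic.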
If your requirements document actually says that it will be done using Winsock2, you're going to need to fight to get that changed before you have any hope of progress. (This is why requirements should specify goals, not means.)
You're exposing yourself to seven levels of hell by trying to do this in software.
IMHO, the simplest solution would be to put a trivial dual NIC gateway box on each of the four network segments that will translate from a separately configured subnet on each physical port to the (hidden) duplicate address on each black box.
+----------+                  +-------+                  +-------+
|          |    172.16.n.1    |       |   192.168.0.1    |       |
|  NIC n   +------------------+  NAT  +------------------+  BB   |
|          |    172.16.n.2    |       |   192.168.0.2    |       |
+----------+                  +-------+                  +-------+
The NAT box would have to proxy packets sent to 192.168.0.1 as if they came from 172.16.n.2 (or be otherwise configured to have 172.16.n.1 as the target destination address), and you would need port forwarding configured to forward inbound packets for 172.16.n.2 to the hidden 192.168.0.2.

How to create a connection with a TCP server

I need to create a TCP connection with a server running on a device, and after connecting I need to send binary data in the following format:
Field          Bytes  Value
1: packet id   1      0x01
2: length      1      2
3: baudrate    4      The bit rate in bps used by the CAN bus.
                      Maximum value is 1000000.
4: extended    1      If this is set to one, the device will use
                      the extended frame format.
How can I create a TCP connection with the server, which is running on port 2000?
I recommend that you look into using CocoaAsyncSocket, which will ease some of this:
https://github.com/robbiehanson/CocoaAsyncSocket
https://github.com/robbiehanson/CocoaAsyncSocket/wiki/Intro
When you have your socket set up, you can send your data (with writeData:) as a C struct, but you may need to get the endianness of the baudrate field right. Check that.
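For the framing itself, a sketch of the struct and the byte-order handling (field names and the big-endian assumption are mine; verify against the device's documentation):

    #include <arpa/inet.h>
    #include <cstdint>

    // Field layout from the question; packed so the compiler inserts no padding.
    #pragma pack(push, 1)
    struct CanConfigMsg {
        uint8_t  packet_id;   // 0x01
        uint8_t  length;      // 2, per the question's table
        uint32_t baudrate;    // bit rate in bps, maximum 1000000
        uint8_t  extended;    // 1 = use the extended frame format
    };
    #pragma pack(pop)

    // Assuming the device expects the multi-byte baudrate in big-endian
    // (network) order -- check this against the device spec.
    CanConfigMsg make_msg(uint32_t baud_bps, bool extended_frames) {
        CanConfigMsg m{};
        m.packet_id = 0x01;
        m.length    = 2;
        m.baudrate  = htonl(baud_bps);
        m.extended  = extended_frames ? 1 : 0;
        return m;
    }

You can then wrap the struct's bytes in an NSData and pass it to writeData:.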

Benefits of "Don't Fragment" on TCP Packets?

One of our customers is having trouble submitting data from our application (on their PC) to a server (in a different geographical location). When sending packets under 1100 bytes, everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router or the first few hops).
I found that our application sets the DF flag on these packets, and I believe a router along the way to the server has an MTU less than or equal to 1100 and is dropping the packets.
This affects 1 client in 5000, but since everybody's route is different, this is to be expected.
The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we do it; the code to do this was written by a previous developer.
So... Are there any benefits OR justification to setting the DF flag on TCP packets for application data?
I can think of reasons it is needed for network diagnostic applications, but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL is stream-based: regardless of fragmentation, as long as the stream is rebuilt at the receiving end, there's no problem.
If there's no good justification I will be changing the behaviour of our application.
Thanks in advance.
The DF flag is typically set on IP packets carrying TCP segments.
This is because a TCP connection can dynamically change its segment size to match the path MTU, and better overall performance is achieved when the TCP segments are each carried in one IP packet.
So TCP packets have the DF flag set, which should cause an ICMP Fragmentation Needed packet to be returned if an intermediate router has to discard a packet because it's too large. The sending TCP will then reduce its estimate of the connection's Path MTU (Maximum Transmission Unit) and re-send in smaller segments. If DF wasn't set, the sending TCP would never know that it was sending segments that are too large. This process is called PMTU-D ("Path MTU Discovery").
If the ICMP Fragmentation Needed packets aren't getting through, then you're dealing with a broken network. Ideally the first step would be to identify the misconfigured device and have it corrected; if that doesn't work out, you can add a configuration knob to your application that tells it to set the TCP_MAXSEG socket option with setsockopt(). (A typical example of a misconfigured device is a router or firewall that has been configured by an inexperienced network administrator to drop all ICMP, not realising that Fragmentation Needed packets are required by TCP PMTU-D.)
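That knob might look like the sketch below with BSD-style sockets; the 1060 is an assumption (roughly the 1100-byte path limit minus 40 bytes of TCP/IP headers), and it must be set before connect(). On Windows, support for setting TCP_MAXSEG varies, so treat this as a POSIX illustration:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    // Clamp the TCP maximum segment size before connecting, as a workaround
    // for a path that drops ICMP Fragmentation Needed messages.
    int make_clamped_socket(int mss = 1060) {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock >= 0)
            setsockopt(sock, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss));
        return sock;
    }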
The operation of Path-MTU discovery is described in RFC 1191, https://www.rfc-editor.org/rfc/rfc1191.
It is better for TCP to discover the Path-MTU than to have every packet over a certain size fragmented into two pieces (typically one large and one small).
Apparently, some protocols like NFS benefit from avoiding fragmentation. However, you're right that you typically shouldn't be setting DF unless you really require it.