Why do we need a SIP "100 Trying" response over TCP?

SIP over UDP: a "100 Trying" response is needed so the caller stops Timer A, which it started when sending the request, and therefore stops retransmitting the SIP message. This really matters because the other responses (provisional and final) to an initial INVITE can take a while: forking, UE-B being unreachable, fallback, etc. all add delay.
SIP over TCP: the caller does not start Timer A, so the message is not retransmitted; TCP is reliable, so retransmission is not required. Even so, why do most implementations send 100 Trying over TCP?

There are a few reasons why 100 Trying is still needed for SIP over TCP.
Having a TCP connection does not guarantee that the SIP application is working, or that it is a SIP-aware application at all. The 100 Trying gives you feedback that your request is being processed by a SIP application.
The lack of a 100 Trying can also be the right trigger not just for retransmissions but for re-attempting the request at a different server in the configuration; you may not want to wait out 32 seconds for every server in the configuration even when the connection is TCP (see the sketch below).
In deployment scenarios where there are elements like an SBC or a load balancer, the TCP connection is established with them. The application behind them can be a different entity, and these edge elements usually either pass on all messaging or generate messaging themselves to indicate that the call is being processed.
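To make the failover point concrete, here is a rough Python sketch of that idea; it is not a real SIP transaction layer, and the function name, server list, pre-built INVITE bytes and 2-second window are all made up for illustration. The point is simply that the absence of any response (100 Trying included) within a short window can drive a move to the next server instead of waiting out the full transaction timeout.

    import socket

    # Hypothetical sketch: send an INVITE over TCP and, if no SIP response of any
    # kind arrives within a short window, fail over to the next configured server
    # instead of waiting out the full transaction timeout.
    def invite_with_failover(servers, invite_bytes, trying_timeout=2.0):
        for host, port in servers:
            try:
                sock = socket.create_connection((host, port), timeout=5)
            except OSError:
                continue                    # TCP connect failed, try the next server
            sock.sendall(invite_bytes)
            sock.settimeout(trying_timeout)
            try:
                data = sock.recv(4096)      # hope for "SIP/2.0 100 Trying" or better
                if data:
                    return sock, data       # a SIP application answered
            except socket.timeout:
                pass                        # TCP is up, but the SIP layer is silent
            sock.close()                    # give up on this server
        raise RuntimeError("no server produced a SIP response")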

Probably because it makes the SIP stack implementation easier. Life is simpler if the SIP transaction layer is the same irrespective of the SIP transport that is used. If the transaction layer has different rules for different transports, that's extra code for no real benefit; the bandwidth saved by not sending the 100 Trying response is negligible in the scheme of things.

Related

Do HTTP clients always wait for a response on a single TCP connection?

This is a purely curiosity-driven question about some subtle issue on the border between HTTP and TCP. I have no concrete problem to solve.
An HTTP request is done over a TCP connection, and a single TCP connection can be used for multiple HTTP requests in a row.
In principle, this means that the client can send a request on a connection before the response for the previous one arrived.
The interesting part is that such multiple requests can really end up being in the same IP packet, and theoretically even the multiple responses could be - de facto batching the requests.
I've come across this topic while looking at the TechEmpower benchmarks, which include a "plaintext" benchmark where 10 such requests are batched together in one send operation (they use the wrk tool to do this).
I'm wondering if this is a purely artificial hack or whether this actually happens, for instance when a browser requests multiple resources from the same server.
Also, can one do this with the HTTP clients of common programming languages, or would one have to go to TCP sockets to get that behavior?
Sending multiple HTTP/1.1 requests without waiting for the response is known as HTTP pipelining (wikipedia link).
As you can read on wikipedia, the technique is promising but it is not enabled by default in browsers "due to several issues including buggy proxy servers and HOL blocking." Nevertheless there is support for it in major HTTP clients and servers.
The technique is not applicable to later versions of the protocol: HTTP/2 uses the TCP connection in a fundamentally different way, and HTTP/3 does not even use TCP.
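To make the pipelining idea concrete, here is a minimal raw-socket sketch in Python. It assumes an HTTP/1.1 server that keeps the connection open and tolerates pipelined requests; example.com and the paths are placeholders. Three requests go out in a single send, and the responses come back on the same connection, in request order.

    import socket

    HOST = "example.com"            # placeholder server
    paths = ["/a", "/b", "/c"]      # placeholder resources

    # Build three requests and send them in one write, without waiting for any
    # response in between (this is pipelining).
    batch = b""
    for i, path in enumerate(paths):
        closing = "Connection: close\r\n" if i == len(paths) - 1 else ""
        batch += f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n{closing}\r\n".encode()

    sock = socket.create_connection((HOST, 80))
    sock.sendall(batch)

    # Read until the server closes after answering the last request.
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    sock.close()
    print(data.decode(errors="replace"))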

How websockets work in respect to TCP/IP and HTTP?

Hi guys, I'm new to understanding the protocols used over the web and need some help understanding the basics of websockets, TCP/IP and HTTP.
My understanding of the relation between TCP/IP and HTTP is that IP is required to connect all networks. TCP is a mechanism that allows us to transfer data safely and HTTP, which utilizes TCP to transfer its data, is a specific protocol used by Web servers and clients.
Does this mean you can't send a HTTP request without TCP?
Websockets communicate over the TCP layer, and the connection between client and server is established through HTTP, which is known as the handshake process.
Do websockets have their own protocol? How can you send an HTTP request (the handshake process) to establish TCP/IP when you need TCP to carry out an HTTP request? I know I am missing something really important here, and it would be great to sharpen my understanding of these protocols!
Firstly, IP is not necessarily required to connect all networks; however, it is the most widely used and adopted today (for now, that is). Older network protocols such as AppleTalk, IPX, and DECnet are legacy protocols that are not much used anymore, though they are still around to an extent. Don't forget that IPv6 is out there as well, in some places, and it can ride over IPv4 networks if your configuration is done properly.
When you say TCP is "safe", I would use another word for it: intelligent. TCP is a transport protocol, and its header comes directly after the IPv4 header. TCP is primarily used for flow control and has become very efficient at error recovery when part of a packet, or whole packets, are lost while transferring/receiving. While this is great for some transactions, the error control requires an additional amount of overhead in the packet. Some applications, VoIP for example, are very sensitive to delay, jitter (variation in delay) and congestion. That is why VoIP uses UDP.
Like TCP, UDP is a transport protocol; however, there is no flow control. Think of it this way: when sending packets over TCP, it's like asking the other end whether they received your message. If they did, they will acknowledge it. If not, you now have to determine how you will resend that information. UDP has none of this. You send your message to the other side and hope it gets there.
Now, if you want to talk about "safe" protocols, that is usually handled at either the network layer (IPsec) or the application layer (SSL/TLS). Safe typically means secured.
A usual TCP three-way handshake looks like this:
Whoever sends the SYN is the client. Whoever receives that initial SYN is the server.
Client sends SYN --> Server
Now, if the server is listening and there isn't a firewall blocking the service (if there were, you would most likely receive a TCP frame from the server with the RST/ACK bits set), the server will respond with a SYN/ACK:
Server sends SYN/ACK --> Client
If the client receives this packet, it acknowledges it:
Client sends ACK --> Server
This completes the three-way handshake, and the two sides may begin exchanging information.
Here's a good site for some info:
http://www.tcpipguide.com/free/index.htm
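To tie the TCP and HTTP pieces back to websockets, here is a minimal Python sketch. It is not a full WebSocket client (it never verifies Sec-WebSocket-Accept or exchanges frames); the host is a placeholder and the key is the sample value from RFC 6455. The point is the layering: connect() performs the TCP three-way handshake, and the WebSocket "handshake" is then just an ordinary HTTP request sent over that already-open TCP connection.

    import socket

    HOST = "ws.example.com"     # placeholder WebSocket server

    # 1. connect() is where the TCP three-way handshake (SYN, SYN/ACK, ACK) happens.
    sock = socket.create_connection((HOST, 80))

    # 2. The WebSocket opening handshake is an ordinary HTTP/1.1 request asking to
    #    upgrade the protocol (the key below is the sample value from RFC 6455).
    upgrade = (
        "GET /chat HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )
    sock.sendall(upgrade.encode())

    # 3. If the server agrees, it replies "HTTP/1.1 101 Switching Protocols" and,
    #    from then on, both sides exchange WebSocket frames on this same connection.
    print(sock.recv(4096).decode(errors="replace"))
    sock.close()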

Fast http communication

I want an HTTP message to be sent and processed quickly by a remote server, via an already established persistent TCP connection.
How can I optimize the communication?
I have a few ideas, but I am not knowledgeable enough about networking to know if they make sense:
HTTP sits on top of TCP. But how does it work exactly? Specifically, if I send 1 http message, does it translate into only 1 tcp message? (I know the initial handshake takes 3 round trip time, but I do not care about this, as the connection is already established). I guess it depends on the Maximum Segment Size that the server can accept?
Can I ask the server for a bigger maximum segment size if needed? How can I do it (I use python, httplib and socket modules, it would be ideal in this language).
The remote server works with TCP, but could I try sending it UDP messages? I know UDP is faster, but could this idea work?
I'll allow myself to comment/answer in-text:
I have a few ideas, but I am not knowledgeable enough about networking
to know if they make sense:
Exactly! Read up on TCP and HTTP on Wikipedia, for a start. It will make things much easier to discuss. Probably also faster than asking on Stack Overflow ;)
HTTP sits on top of TCP. But how does it work exactly?
Well, exactly like protocols work over each other in a layered protocol stack. Read Wikipedia's TCP article, and about the OSI/ISO layer model.
Specifically, if I send 1 http message, does it translate into only 1
tcp message?
No. HTTP itself doesn't care (and it doesn't have to) how many lower-level packets the communication gets split into.
(I know the initial handshake takes 3 round trip time,
but I do not care about this, as the connection is already
established).
Three round trips? That doesn't make sense. Read about the TCP handshake and how HTTP asks for a document.
I guess it depends on the Maximum Segment Size that the
server can accept?
Among a lot of other factors; but really, HTTP couldn't care less!
Can I ask the server for a bigger maximum segment size if needed?
No. Your network stack will most probably automatically use the biggest MTU that works.
How can I do it (I use python, httplib and socket modules, it would be
ideal in this language).
The remote server works with TCP, but could I try sending it UDP messages?
There are some specialized HTTP-over-UDP protocols, but they are not HTTP. Generally, HTTP is spoken over TCP, but again, the internet works on a layered protocol stack, and higher-level protocols usually don't care what transports their data; you could perfectly well have an HTTP session over carrier pigeons!
I know UDP is faster, but could this idea work?
It's not; that's a misconception. UDP has no automatic re-requesting of packets that get lost along the way, which might be acceptable for things like multimedia or games, but TCP gives you an ordered session, which is necessary for HTTP.

Does data loss happen with a fast sender and a very slow receiver?

I have an application consisting of a client and a server that communicate using sockets.
On the server side, in the thread where it receives messages from the client, I have added a sleep call of 10 seconds. Now when I send messages from the client to the server 1,000,000 times, the messages are received by the server very slowly. My questions are as follows:
- Does it mean that the receiving call on the server side is a blocking call?
- Secondly, is there any good document that can help me better understand the blocking and non-blocking behavior of the send and receive calls of sockets?
It depends on whether you're using TCP or UDP sockets. TCP guarantees delivery; UDP doesn't. So in a UDP application packets can be dropped for any number of reasons, including the sender transmitting faster than the receiver can keep up.
By default, calls on sockets are blocking calls. You have to set non-blocking mode explicitly.
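For concreteness, here is a minimal Python sketch of the difference, using a placeholder server: by default recv() blocks, a timeout bounds how long it blocks, and non-blocking mode makes it return (raise) immediately when nothing is queued.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("example.com", 80))   # placeholder server

    # Default behaviour: recv() would block until data arrives or the peer closes.

    # Option 1: a timeout turns "block forever" into "block for at most N seconds".
    sock.settimeout(2.0)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        data = b""                      # nothing arrived within 2 seconds

    # Option 2: fully non-blocking; recv() raises immediately if no data is queued.
    sock.setblocking(False)
    try:
        more = sock.recv(4096)
    except BlockingIOError:
        more = b""                      # would have blocked, so just carry on

    sock.close()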

UDP Response

UDP does not send any ACK back, but will it send any response?
I have set up a client/server UDP program. If I tell the client to send data to a non-existent server, will the client receive any response?
My assumption is as follows:
Client --> broadcasts a request for the server's address (ARP)
Server --> replies to the client with its MAC address (ARP)
Client sends data to the server (UDP)
In any case the client will only receive the ARP response. Whether or not the server exists, it will not get any UDP response?
The client is using the sendto function to send data. We can get error information after the sendto call.
So my question is how this information is available when the client doesn't get any response.
The error code can be obtained from WSAGetLastError.
I tried to send data to a non-existent host and the sendto call succeeded. As per the documentation it should fail with the return value SOCKET_ERROR.
Any thoughts?
You can never count on receiving an error or notice for a UDP packet that did not reach its destination.
The sendto call didn't fail. The datagram was sent to the destination.
The recipient of the datagram, or some router on the way to it, might return an error response (host unreachable, port unreachable, TTL exceeded), but the sendto call will be history by the time your system receives it. Some operating systems do provide a way to find out that this occurred, often with a getsockopt call. But since you can't rely on getting an error reply anyway, because it depends on network conditions you have no control over, it's generally best to ignore it.
Sensible protocols layered on top of UDP use replies. If you don't get a reply, then either the other end didn't get your datagram or the reply didn't make it back to you.
"UDP is a simpler message-based connectionless protocol. In connectionless protocols, there is no effort made to set up a dedicated end-to-end connection. Communication is achieved by transmitting information in one direction, from source to destination without checking to see if the destination is still there, or if it is prepared to receive the information."
The machine to which you're sending packets may reply with an ICMP UDP port unreachable message.
The UDP protocol is implemented on top of IP. You send UDP packets to hosts identified by IP addresses, not MAC addresses.
And as pointed out, UDP itself will not send a reply; you will have to add code to do that yourself. Then you will have to add code to expect the reply, and to take the proper action if the response is lost (typically resend on a timer until you decide the other end is "dead"), and so on.
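A minimal Python sketch of that reply-plus-timer pattern (the address is a placeholder; three attempts and a one-second wait are arbitrary choices):

    import socket

    SERVER = ("198.51.100.7", 9999)     # placeholder address and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)                # how long to wait for a reply per attempt

    reply = None
    for attempt in range(3):            # resend on a timer, then give up
        sock.sendto(b"ping", SERVER)    # "succeeds" even if nobody is listening
        try:
            reply, addr = sock.recvfrom(4096)
            break                       # got an application-level reply
        except socket.timeout:
            continue                    # datagram or reply was lost; resend
    sock.close()

    if reply is None:
        print("no reply after 3 attempts; treating the server as dead")
    else:
        print("got reply:", reply)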
If you need reliable UDP, with the ordering or verification that TCP/IP would give you, take a look at RUDP, or Reliable UDP. Sometimes you do need verification, but a mixture of UDP and TCP can get held up by the TCP reliability, causing a bottleneck.
For most large-scale MMOs, for instance, UDP and reliable UDP are the means of communication and reliability. All RUDP does is add a smaller portion of TCP/IP-style handling to validate and order certain messages, but not all of them.
A common game development networking library is RakNet, which has this built in.
RUDP
http://www.javvin.com/protocolRUDP.html
An example of RUDP using Raknet and Python
http://pyraknet.slowchop.com/
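For a sense of what "a smaller portion of TCP/IP-style handling" can mean in practice, here is a toy Python sketch (not RakNet's API, just an illustration): a 4-byte sequence number is prepended to a datagram, and the sender resends until the peer echoes that number back as an acknowledgement. Messages that don't need this can still be sent as plain datagrams.

    import socket
    import struct

    PEER = ("198.51.100.8", 9999)       # placeholder address and port

    # Toy "reliable message" layered on UDP: resend until the peer echoes the
    # 4-byte sequence number back as an ACK (real RUDP/RakNet does far more).
    def send_reliable(sock, seq, payload, retries=5, timeout=0.5):
        packet = struct.pack("!I", seq) + payload
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(packet, PEER)
            try:
                data, _ = sock.recvfrom(16)
            except socket.timeout:
                continue                # lost packet or lost ACK: resend
            if len(data) >= 4 and struct.unpack("!I", data[:4])[0] == seq:
                return True             # peer confirmed this sequence number
        return False

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ok = send_reliable(sock, 1, b"important update")
    sock.close()
    print("delivered" if ok else "gave up")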