SIP over TLS bandwidth consumption - sip

Let's say I have a connection using SIPS (SIP Secure) with the g729 codec.
Does anyone know how much bandwidth it takes?
I know a call using the g729 codec with 10 ms packetization consumes about 11 kbit/s of bandwidth.

My take on it:
SIP and SIPS use almost the same bandwidth; it makes no difference.
The SIP bandwidth compared to RTP is insignificant.
Imagine you have a one minute call. The total exchange would be, for example:
For SIP:
1000 bytes for INVITE
1000 bytes for 200 OK for INVITE
500 bytes for ACK
500 bytes for BYE
500 bytes for 200 OK for BYE
total = 3500 bytes
For RTP and g729, with 10 ms:
Each of my RTP packets is 22 bytes (not including UDP headers):
G729 payload: 10 bytes
RTP header: 12 bytes
total = 100 * 22 = 2200 bytes/second (which is 17.6 kbit/s)
total = 100 * 22 * 60 = 132000 bytes for a one-minute call
For only one minute, the ratio is already:
132000/(132000+3500) = 97.4%
3500/(132000+3500) = 2.6%
For longer call durations, the SIP-related bandwidth quickly drops below 1%.
If you have frequent SIP messages during the call (like INFO), maybe you can take them into account, but this is usually not the case.
NOTE: I used an 8kbit/s G729 stream encoder instead of 11kbit/s. Just replace with your own values.
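As a rough sanity check of the numbers above, here is a small Python sketch using the same example values (message sizes, 10 ms packetization, 8 kbit/s G729 payload); they are illustrative assumptions, not measurements:

```python
# SIP-vs-RTP bandwidth share for a one-minute g729 call at 10 ms packetization.
SIP_BYTES = 1000 + 1000 + 500 + 500 + 500   # INVITE, 200 OK, ACK, BYE, 200 OK
RTP_PACKET = 10 + 12                        # 10-byte G729 payload + 12-byte RTP header
PACKETS_PER_SECOND = 100                    # one packet every 10 ms
CALL_SECONDS = 60

rtp_bytes = RTP_PACKET * PACKETS_PER_SECOND * CALL_SECONDS   # 132000 bytes
total = rtp_bytes + SIP_BYTES

print(f"RTP: {rtp_bytes} bytes ({rtp_bytes / total:.1%})")   # ~97.4%
print(f"SIP: {SIP_BYTES} bytes ({SIP_BYTES / total:.1%})")   # ~2.6%
```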
EDIT:
With the usual SRTP encryption method, the encrypted payload stays the same size as the plain payload. However, an additional authentication tag is usually appended; with AES_CM_128_HMAC_SHA1_80, 10 bytes are added to each packet.
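To put a number on that, a quick extension of the sketch above (same assumed packet sizes, 10-byte tag for AES_CM_128_HMAC_SHA1_80, still excluding UDP/IP headers):

```python
# Media bitrate per direction, with and without the SRTP authentication tag.
G729_PAYLOAD = 10       # bytes per 10 ms frame
RTP_HEADER = 12         # bytes
SRTP_AUTH_TAG = 10      # bytes, AES_CM_128_HMAC_SHA1_80

rtp_kbps = (G729_PAYLOAD + RTP_HEADER) * 100 * 8 / 1000                   # 17.6 kbit/s
srtp_kbps = (G729_PAYLOAD + RTP_HEADER + SRTP_AUTH_TAG) * 100 * 8 / 1000  # 25.6 kbit/s
print(f"RTP: {rtp_kbps:.1f} kbit/s, SRTP: {srtp_kbps:.1f} kbit/s")
```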

Related

What happens when a process tries to read more bytes than the one that sent it

If two processes communicate via sockets and Process A sends Process B 100 bytes,
and Process B then tries to read 150 bytes, while Process A sends another 50 bytes later:
What is the result of Process B's read?
Will Process B's read wait until it receives 150 bytes?
That is dependent on many factors, especially related to the type of socket, but also to the timing.
Generally, however, the receive buffer size is considered a maximum. So, if a process executes a recv with a buffer size of 150, but the operating system has only received 100 bytes so far from the peer socket, usually the available 100 are delivered to the receiving process (and the return value of the system call will reflect that). It is the responsibility of the receiving application to go back and execute recv again if it is expecting more data.
Another related factor (which will not generally be the case with a short transfer like 150 bytes but definitely will if you're sending a megabyte, say) is that the sender's apparently "atomic" send of 1000000 bytes will not all be delivered in one packet to the receiving peer, so if the receiver has a corresponding recv with a 1000000 byte buffer, it's very unlikely that all the data will be received in one call. Again, it's the receiver's responsibility to continue calling recv until it has received all the data sent.
And it's generally the responsibility of the sender and receiver to somehow coordinate what the expected size is. One common way to do so is by including a fixed-length header at the beginning of each logical transmission telling the receiver how many bytes are to be expected.
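For illustration, a minimal sketch of that pattern in Python (the 4-byte length prefix is just an assumed framing convention, not something the socket API provides):

```python
import socket
import struct

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Keep calling recv() until exactly n bytes have arrived (or the peer closes)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))   # may return fewer bytes than requested
        if not chunk:
            raise ConnectionError("peer closed before the full message arrived")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    """Read one length-prefixed message: 4-byte big-endian length, then that many payload bytes."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```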
Depends on what kind of socket it is. For a STREAM socket, the read will return either the amount of data currently available or the amount requested (whichever is less) and will only ever block (wait) if there is no data available.
So in this example, assuming the 100 bytes have (all) been transmitted and received into the receive buffer when B reads from the socket and the additional 50 bytes have not yet been transmitted, the read will return those 100 bytes and will not wait.
Note also, the dependency of all the data being transmitted and received -- when process A writes data to a socket it will not necessarily be sent immediately or all at once. Depending on the underlying transport, there's an MTU size and any write larger than that will be broken up. Smaller writes may also be delayed and combined with later writes to make up the MTU. So in your case the send of 100 bytes might be too large (and broken up), or might be too small and not be transmitted immediately.
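The sending side has the mirror-image caveat: a single send() is not guaranteed to accept the whole buffer, so a loop like the sketch below (or Python's built-in socket.sendall()) is the usual counterpart to the receive loop shown earlier:

```python
import socket

def send_all(sock: socket.socket, data: bytes) -> None:
    """Keep calling send() until every byte has been handed to the kernel."""
    sent = 0
    while sent < len(data):
        sent += sock.send(data[sent:])   # send() may accept only part of the remaining buffer
```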

Protocol with small(est) overhead over GPRS

Our company developed stations that collect data in agricultural fields. These fields can be in the middle of nowhere, so the stations use GSM/GPRS with a SIM card that automatically switches to the strongest provider.
Every 5 minutes, an internet connection is set up to send data to a server. The data has a structure with packet length, command, sensor data and a CRC check, but these data structures are sent with an HTTP POST to a URL.
For 480 bytes of data, about 2550 bytes of data traffic is used, so there is a lot of overhead in the HTTP protocol: since we only need to send 480 bytes of data, we have roughly 80% overhead with the POST over HTTP. We now have a few hundred stations and that number is growing, so the costs for data traffic are increasing rapidly.
We want to redesign the transmission of the data. The data is sent by a Microchip microprocessor in the stations.
Our goal is to decrease the overhead as much as possible while keeping guaranteed data delivery, so I looked into TCP and UDP:
TCP has failure detection and recovery, but higher overhead.
UDP has lower overhead, but delivery without failure is not guaranteed.
My first idea is to build a server that listens on a TCP port and have the stations send the data over TCP, mainly because of the guaranteed delivery.
With UDP we would have to implement checking and resending of data ourselves, but the data structure of our records is already prepared for such checks.
So I am really in doubt what to do. And I am trying to get an answer on these questions:
How many bytes of overhead would TCP and UDP each need to send (and deliver) 480 bytes of data?
Are TCP and UDP the best ways to consider for sending 480 bytes of data, or is there a smarter solution with even lower overhead?
How many bytes of overhead would TCP and UDP each need to send (and deliver) 480 bytes of data?
A (typical) TCP header is 20 bytes long, although it can be (slightly) longer with options. If the entire 480 bytes are sent in a single TCP segment, you'd end up with 480 + 20 + 20 (IP header) = 520 bytes before layer-2 overhead.
UDP has an 8 byte header, so for UDP you'll have 480 + 8 + 20 = 508 bytes.
However, you should consider that TCP is a stream protocol. Reading from a TCP socket is like reading from a binary file - you'd need to split that stream into individual messages yourself by using some sort of delimiter or by prepending the length of the message to each message.
UDP on the other hand works on individual messages. Reading from a UDP socket would return messages one at a time.
Are TCP and UDP the best ways to consider for sending 480 bytes of data, or is there a smarter solution with even lower overhead?
UDP and TCP are the lowest level transport protocols on the internet. HTTP and other high-level protocols are built on top of them. If the size of data is critical, raw TCP and UDP are as low-overhead as you're going to get without using RAW sockets and embedding your data directly into IP packets.
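As a quick comparison, the header arithmetic above in code form (minimal 20-byte IPv4 and TCP headers with no options, ignoring layer-2 framing as well as TCP's handshake, ACK and teardown packets):

```python
PAYLOAD = 480   # bytes of sensor data per report
IP_HDR = 20     # minimal IPv4 header
TCP_HDR = 20    # minimal TCP header, no options
UDP_HDR = 8     # UDP header

tcp_bytes = PAYLOAD + TCP_HDR + IP_HDR   # 520 bytes for the data segment itself
udp_bytes = PAYLOAD + UDP_HDR + IP_HDR   # 508 bytes per datagram

print(f"TCP: {tcp_bytes} bytes ({(tcp_bytes - PAYLOAD) / tcp_bytes:.1%} overhead)")
print(f"UDP: {udp_bytes} bytes ({(udp_bytes - PAYLOAD) / udp_bytes:.1%} overhead)")
```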

Can UDP packets be partially sent like TCP ones?

I've created some type of client/server application that has its own data ACK system. It was originally written in TCP because of some limitations, but the base was written thinking about UDP.
The packets that I sent to the server had their own encapsulation (packet id and packet size headers; I know that UDP also has a checksum, so I didn't add a header for that), but given how TCP works, I knew the server might not receive an entire packet in one read, so I gathered and buffered the received data until a full valid packet had been received.
Now I have the chance to change my client/server program to UDP, and I know that one difference from TCP is that data may not be received in the same order as it was sent (which is why I added a packet id header).
The thing that I want to know is: If I send multiple packets, will they be received with no guaranteed order but with guaranteed encapsulation? I mean, if I send a packet sized 1000 bytes of data and another packet sized 400 bytes of data later, will the server receive 2 packets, one of 1000 bytes and another one of 400 bytes, or is there a chance to receive 200 of that 1000 bytes, then 400 bytes of that 1000 bytes and later the rest of the bytes like TCP does?
UDP is a datagram service. Datagrams may be split for transport, but they will be reassembled before being passed up to the application layer.
With small packet sizes you should have no concern that packets will be broken into multiple pieces. That generally only becomes an issue when a packet is larger than an Ethernet network can carry in a single frame.
You ask" will the server receive 2 packets, one of 1000 bytes and another one of 400 bytes, or there's a chance to receive 200 of that 1000 bytes, then 400 bytes of that 1000 bytes and later the rest of the bytes like TCP can do?
With a packet size of under 1492 bytes there is not going to be any partial packets.
UPDATE:
Apparently I need to clarify why I say UDP packet lengths of 1492 bytes or less will not affect transport robustness.
The maximum UDP length as implicitly specified in RFC 768 is 65535 including the 8 byte Header. Max Payload Frame Length is 65527 bytes.
While this number should not be disputed, the UDP data length is often reported incorrectly. This is exemplified in a previous post:
What is the largest Safe UDP Packet Size on the Internet
A data packet is not constrained by the MTU of the underlying network ToS or the communications protocol's Frame length (e.g. IP and Ethernet respectively). Discrepancies between MTU and protocol lengths are remedied by Fragmentation and Reassembly.
At the Transport Layer each network Type of Service (ToS) has a specific Maximum Transmission Unit (MTU). UDP is encapsulated within IP Packets and IP Packets are encapsulated by the transporting Network's ToS. IP packets are often transmitted through networks of various ToS which include Ethernet, PPP, HDLC, and ADCCP.
When the MTU for a receiving Network ToS is less than the sending ToS then the receiving network must Fragment the received packet. When the Network sends a packet to a network with a higher MTU, the receiving Network must reassemble any Fragmented packets.
Ethernet is the de facto mainstream protocol with the lowest MTU. For the non-mainstream ARCNET, the MTU is 507 bytes. The practical lowest MTU is Ethernet's 1500 bytes; minus the overhead, that makes the maximum payload length 1492 bytes.
If the UDP packet has more than 1492 bytes, the data packet will likely be Fragmented and Reassembled. Fragmentation and Reassembly add complexity to the already complex process coupling UDP and IP, and therefore should be avoided.
Because UDP is a non-guaranteed datagram delivery protocol, it boosts transport performance; robustness is left to the originating and terminating Application. RFC 1166 sets the standards for the communication protocol link layer, IP layer, and transport layer; the UDP Application is responsible for packetization, reassembly, and flow control.
The maximum UDP packet size can also be lowered by a Communication Host's Application Layer. The packet length is a balance between performance and robustness.
The Communications Host's Application Layer may set a maximum UDP packet size. The typical UDP max data length at the Application layer will use the maximum allowed by the IP protocol or the Host Data Link Layer, typically Ethernet.
It is the Application's programmer who chooses to use the Host Application Layer or the Host Data Link Layer. The Host Application Layer will detect UDP packet errors and discard the packet if necessary. When the application communicates directly with the Host Data Link, the application then has the responsibility of detecting packet errors.
Using maximum UDP data packet length of Ethernet's max payload length of 1492 bytes will eliminate the issues of Fragmentation and Delivery Order of multiple Frames.
That is why I said packet length is not a Fragmentation issue with packet lengths of 1000 and 400 bytes.
I do not know what you mean by "guaranteed encapsulation", it makes no sense to me.
With IP itself there is no guarantee of packet delivery or of ordering, whether UDP or TCP is carried on top.
As long as you control both sides of the conversation, you can work out your own protocol within the data packet to handle ordering and lost packets. Reserve the first x bytes of the packet for a sequential order number and the total number of packets (e.g. 1 of 3, 2 of 3, 3 of 3). If the client side is missing a packet, then the client must send a request for retransmission. You need to determine to what level you want to go for data integrity; for example, what happens if the retransmission itself is lost?
That may be what you meant by "guaranteed encapsulation": other information within your datagram packet to ensure some integrity. You should add your own CRC for the total data being sent if it is broken into multiple datagrams; the UDP checksum is not very robust and only covers a single packet.
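As a concrete illustration of that kind of application-level header, here is a minimal sketch; the field layout and sizes are made up for the example, and a real protocol would also need the retransmission-request side:

```python
import struct

# Hypothetical 8-byte header: message id, fragment index, fragment count, payload length.
HEADER = struct.Struct("!HHHH")

def pack_fragment(msg_id: int, index: int, count: int, payload: bytes) -> bytes:
    """Prepend the sequencing header so the receiver can reorder and detect gaps."""
    return HEADER.pack(msg_id, index, count, len(payload)) + payload

def unpack_fragment(datagram: bytes):
    """Split a received datagram back into header fields and payload."""
    msg_id, index, count, length = HEADER.unpack_from(datagram)
    return msg_id, index, count, datagram[HEADER.size:HEADER.size + length]
```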
UDP is much faster than TCP, but TCP has flow control and guaranteed delivery.
UDP is good for streaming content like voice where a lost packet is not going to matter.
Network reliability has improved a lot since the days when these issues were a major concern.

What percentage of the network bandwidth "on the wire" is used for the message data?

A 1000 byte message is sent over the network using a protocol stack with HTTP, TCP, IP, and Ethernet. Each protocol header is 20 bytes long. What percentage of the network bandwidth "on the wire" is used for the message data? Give a numeric answer only.
Total network bandwidth on the wire -> 1000 byte message + 80 bytes of protocol headers (4 × 20) = 1080 bytes.
So the percentage for the message data is 1000/1080 ≈ 92.6%, i.e. roughly 93.
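The same division, spelled out:

```python
headers = 4 * 20                 # HTTP + TCP + IP + Ethernet headers, 20 bytes each
print(1000 / (1000 + headers))   # 0.9259..., about 92.6% of the bytes on the wire are message data
```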

Bandwidth save GPRS and TCP

Hello, I have made a program for my old Windows Mobile phone to send GPS data, temperature, etc. every 5 seconds, just for experimental reasons, to create a fleet management system.
I noticed that within one hour 350 KB were consumed, although I sent only 20 KB of data...
As I don't have deep knowledge of networks: how much does a TCP connection cost in bytes?
Maybe I should keep the socket alive, because I currently close and reopen it every 5 seconds. Would that save bytes?
Also, does the MTU matter here?
Any other idea to reduce the overhead?
Thank you
Let's do some math here.
Every 5 seconds is 720 connections per hour plus data. 20K / 720 is about 28 bytes of payload (your GPS data) for each connection.
IP and TCP headers alone come to roughly 48 bytes per packet (20 bytes each, plus typical TCP options), in addition to whatever data is being sent.
3-way handshake connection: 3 packets (2 out, 1 in) == 96 bytes out and 48 bytes in
Outbound Data-packet: 48+28 bytes == 76 bytes (out)
Inbound Ack: 48 bytes (in)
Close: 48 bytes (out)
Final Ack: 48 bytes (in)
Total out per connection: 220
Total in per connection: 144
Total data send/received per connection: 220+144 = 364
Total data usage in one hour = 364 * 720 = 262K
So I'm in the ballpark of your data usage estimates.
If you're looking to reduce bandwidth usage, here's three ideas:
Scale back on your update rate.
Don't tear down the socket connection each time. Just keep it open.
Given your GPS coordinates are periodically updated, you could consider using UDP instead of TCP. There's potential for packet loss, but given you're retransmitting fresher data every 5 seconds anyway, an update getting lost isn't worth the bandwidth to retransmit. IP and UDP headers combined are only 28 bytes with no "connection" overhead; see the sketch after this list.
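A minimal sketch of that UDP approach (the server address, port and report format are placeholders, not anything from the original setup):

```python
import socket
import struct
import time

SERVER = ("198.51.100.10", 9000)   # placeholder address and port

def build_report(lat: float, lon: float, temp: float) -> bytes:
    # Hypothetical fixed-size binary report: three doubles = 24 bytes of payload.
    return struct.pack("!ddd", lat, lon, temp)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    # One datagram per update: 24 bytes of payload + 28 bytes of IP/UDP headers, no handshake.
    sock.sendto(build_report(51.0, 4.0, 21.5), SERVER)
    time.sleep(5)
```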
UPDATE
When I originally posted this, I misunderstood the connection close to be a single exchange of FIN packets between client and server. In practice, the client sends a FIN as part of initiating the close; the server ACKs that FIN, then the server sends its own FIN, which is ACK'd by the client. In other words, an additional 96 bytes per connection. Redoing our math:
Total data sent/received per connection =
(220 + 48) + (144 + 48) = 460
Total data usage in one hour = 460 * 720 = 331K
So my revised estimate of 331KB in one hour is a bit closer to what the OP saw.
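For reference, the revised arithmetic in code form, using the same assumed 48-byte IP+TCP header size and 28-byte payload as above:

```python
HDR = 48            # assumed IP + TCP header bytes per packet, as in the estimate above
PAYLOAD = 28        # roughly 20 KB of data spread over 720 connections
CONNECTIONS = 720   # one connection every 5 seconds for an hour

out_bytes = HDR + HDR + (HDR + PAYLOAD) + HDR + HDR   # SYN, ACK, data, FIN, ACK of server's FIN = 268
in_bytes = HDR + HDR + HDR + HDR                      # SYN-ACK, data ACK, server's FIN, ACK of our FIN = 192

per_connection = out_bytes + in_bytes                 # 460 bytes
print(per_connection, per_connection * CONNECTIONS)   # 460, 331200 bytes (~331 KB per hour)
```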