Voice delay while using linphone-sdk (SIP)

I am running various tests to build an application with linphone's Android library, and I ran into a problem during testing (I am using the EasyLinphone demo).
When I make a call from Android to a PC, the audio coming from the PC to Android is heavily delayed, or nothing is heard for a few seconds. With the EasyLinphone example between two Android devices (https://github.com/forever4313/EasyLinphone) this problem did not occur. The problem only appears when Android calls the PC first. This delay is a serious issue for my work.
I looked into this problem and found the following logcat output.
I would appreciate any suggestions on how to solve this problem.
Logcat messages:
-----------------------------------------------------------------------
07-05 04:53:51.800 4285-5556/com.xuchongyang.easylinphone E/huanyutong: rtp_parse: discarding too old packet (seq_num=13638, ts=22080)
07-05 04:53:51.820 4285-5556/com.xuchongyang.easylinphone E/huanyutong: rtp_parse: discarding too old packet (seq_num=13639, ts=22240)
07-05 04:53:51.940 4285-5556/com.xuchongyang.easylinphone I/huanyutong: Stun packet sent on rtcp for session [0x74e36ba0]
07-05 04:53:52.260 4285-4285/com.xuchongyang.easylinphone I/huanyutong: Bandwidth usage for call [0x74c5b620]:
RTP audio=[d= 80.0,u= 80.0], video=[d= 0.0,u= 0.0], text=[d= 0.0,u= 0.0] kbits/sec
RTCP audio=[d= 0.0,u= 0.8], video=[d= 0.0,u= 0.0], text=[d= 0.0,u= 0.0] kbits/sec
07-05 04:53:52.260 4285-4285/com.xuchongyang.easylinphone I/huanyutong: Thread processing load: audio=25.112965 video=0.000000 text=0.000000
07-05 04:53:52.450 4285-5556/com.xuchongyang.easylinphone I/huanyutong: Stun packet sent on rtcp for session [0x74e36ba0]
07-05 04:53:52.460 4285-5556/com.xuchongyang.easylinphone I/huanyutong: Sending RTCP SR compound message on session [0x74e36ba0].
07-05 04:53:52.460 4285-4285/com.xuchongyang.easylinphone I/huanyutong: MSAudio_stream_iterate[0x74bf3b20], local statistics available:
Local current jitter buffer size: 16777506.0ms

Related

I cannot send short messages over the TCP protocol

I am having trouble tuning TCP client-server communication.
My current project has a client running on a PC (C#) and a server
running on embedded Linux 4.1.22-ltsi.
They currently use UDP to exchange data.
The client and server work in blocking mode and
send short messages to each other
(16, 60, 200 bytes, etc.) that contain either a command or a set of parameters.
The messages do not include any header with the message length, because
UDP is a message-oriented protocol: its recvfrom() API returns the number of bytes of one received datagram.
For my server's program structure it is important to receive and process each message as a single, whole unit.
The problem arises when I try to switch the communication from UDP to TCP.
The server's receive buffer (for the recv() TCP API) is 2048 bytes:
#define UDP_RX_BUF_SIZE 2048
numbytes = recv(fd_connect, rx_buffer, UDP_RX_BUF_SIZE, MSG_WAITALL /* blocking mode */);
With MSG_WAITALL, recv() only returns from waiting when rx_buffer is full, i.e. after it has received
2048 bytes. This breaks the whole program design. In other words, when the client sends a 16-byte command
to the server and waits for an answer, the server's recv() holds the message
back until it has received 2048 bytes.
I tried to fix it as follows, without success:
On the client side (C#) I set the socket parameter theSocket.NoDelay.
When I checked this with a sniffer I saw that the client sends the messages as intended,
with the requested lengths.
On the server side I set the TCP_NODELAY socket option to 1:
int optval = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval));
On the server side (Linux) I checked the SO_SNDLOWAT/SO_RCVLOWAT socket options and they are 1 byte each.
Please see the attached sniffer log picture. 10.0.0.10 is the client and 10.0.0.106 is the server. It shows that the client sets the PSH (push) flag, telling the receiving side to deliver the incoming data to the application immediately rather than letting it fill a buffer.
Additional question: what are the SSH-encrypted packets running between the two sides? I suppose they come from my Eclipse debugger on the PC (which runs the server application over the same Ethernet connection). Am I right?
So, my problem is how to make the recv() API return each short message (16, 60, 200 bytes, etc.) instead of accumulating data until the receive buffer fills.
TCP is connection-oriented and preserves the order in which bytes are sent and received.
That said, on a TCP connection you receive a stream of bytes, not individual messages as with UDP. So you will need to send the message length (and, optionally, a marker) as the initial bytes of each message.
The receiver can then read the length first, read data until that many bytes have arrived, and then expect the next length header, as sketched below.
You can also look at libraries like Netty or ZeroMQ that do this extra framing work for you.
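To make that concrete, here is a minimal sketch of such framing on the C receiver side. The 2-byte big-endian length prefix and the helper names are assumptions chosen for illustration, not part of the original protocol; the client would prepend the same 2-byte prefix to every message it sends.

#include <sys/types.h>
#include <stdint.h>
#include <sys/socket.h>
#include <arpa/inet.h>   /* ntohs */

/* Keep calling recv() until exactly `len` bytes have arrived (or the peer closes / an error occurs). */
static ssize_t recv_exact(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0)
            return n;              /* 0 = connection closed, -1 = error (check errno) */
        got += (size_t)n;
    }
    return (ssize_t)got;
}

/* Read one framed message: a 2-byte length header, then exactly that many payload bytes. */
static ssize_t recv_message(int fd, char *buf, size_t buf_size)
{
    uint16_t len_be;
    if (recv_exact(fd, &len_be, sizeof(len_be)) <= 0)
        return -1;
    uint16_t len = ntohs(len_be);
    if (len > buf_size)
        return -1;                 /* message larger than the caller's buffer */
    return recv_exact(fd, buf, len);
}

The server would then call recv_message() instead of a single recv(..., MSG_WAITALL), so each 16-, 60- or 200-byte command is returned as soon as it is complete rather than when 2048 bytes have accumulated.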

Can WinDivert inject packets larger than the MTU?

I used WinPcap and got errors on "pcap_sendpacket". I fragmented the packet into smaller IP packets of MTU size and it still did not work, even though Wireshark did not show errors in the fragments I created.
Now I have this question: can WinDivert inject packets larger than the MTU? I need to know that before trying to disable "large send offload" (LSO). If I disable it, will I be able to send packets larger than the MTU with WinPcap, and with WinDivert? Is that the only way to solve this?
Sometimes my program has to forward a packet received through WinPcap that is 2300 bytes while my MTU is 1500, and it fails. If I receive the packet with WinDivert and send it with WinDivert, will I get errors? Is disabling LSO a solution?
Regards.
Now I have this question: can WinDivert inject packets larger than the MTU?
Yes, you should be able to "inject" it. However, the packet may be dropped (IPv6) or fragmented (IPv4) by the network en route to the destination.

Scapy Sends Malformed Packets

I'm sending out probe requests using scapy. It works perfectly fine on my desktop, but when I send it out from scapy using the exact same code, the packets arrive malformed. I'm watching them in Wireshark.
The malformed one has a Logical-Link Control layer and the bits are all just out of order; I don't really know how else to put it. The source and destination MAC addresses are both offset by a few bytes, and the packet is twice as large. I'm really baffled.
For example
in scapy, my destination address is "aa:bb:cc:dd:ee:ff"
In the packet capture, the destination is "00:00:00:aa:bb:cc"
EDIT:
The packets show up fine in Wireshark on my laptop, but Wireshark on my desktop is where the issue appears.
sendp(Dot11(addr1=dest,
addr2=source,
addr3=source)/
Dot11ProbeReq()/
Dot11Elt(ID="SSID",info='test')/
Dot11Elt(ID="Rates", info='\x02\x04\x0b\x16\x0c\x12\x18$')/
Dot11Elt(ID="ESRates", info='0H`l')/
Dot11Elt(ID="DSset", info='\x06'),
iface='wlan0', count=3)
EDIT: I believe the issue is because scapy is sending the wrong type/subtype.
The packet should have
Type/subtype: Probe Request (0x04)
but the packet in wireshark displays
Type/subtype: Data (0x20)
Monitor mode was not initiated correctly. The packets became malformed when not sent over a monitor interface.
try
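# RadioTap() below adds the radiotap header that is expected when injecting 802.11 frames on a monitor-mode interface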
sendp(RadioTap()/
Dot11(addr1=dest,
addr2=source,
addr3=source)/
Dot11ProbeReq()/
Dot11Elt(ID="SSID",info='test')/
Dot11Elt(ID="Rates", info='\x02\x04\x0b\x16\x0c\x12\x18$')/
Dot11Elt(ID="ESRates", info='0H`l')/
Dot11Elt(ID="DSset", info='\x06'),
iface='wlan0', count=3)

read() on a NON-BLOCKING tun/tap file descriptor gets EAGAIN error

I want to read IP packets from a non-blocking tun/tap file descriptor tunfd.
I set tunfd to non-blocking and register a READ_EV event for it in libevent.
When the event is triggered, I first read 20 bytes to get the IP header, and then
read the rest.
nr_bytes = read(tunfd, buf, 20);
...
ip_len = .... // here I get the IP length
....
nr_bytes = read(tunfd, buf+20, ip_len-20);
But for the second read(tunfd, buf+20, ip_len-20)
I get an EAGAIN error. There should actually be a full packet available,
so there should be some bytes to read.
Why do I get this error?
Is tunfd incompatible with non-blocking mode or libevent?
Thanks!
Reads and writes with TUN/TAP, much like reads and writes on datagram sockets, must be for complete packets. If you read into a buffer that is too small to fit a full packet, the buffer will be filled up and the rest of the packet will be discarded. For writes, if you write a partial packet, the driver will think it's a full packet and deliver the truncated packet through the tunnel device.
Therefore, when you read a TUN/TAP device, you must supply a buffer that is at least as large as the configured MTU on the tun or tap interface.
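For illustration, here is a minimal sketch in C of reading whole packets from a tun descriptor under those rules, handling EAGAIN from a non-blocking fd. The 1500-byte MTU, the buffer size, and the callback name are assumptions; size the buffer from the interface's actual MTU (plus 4 bytes for the packet-information header if the device was not opened with IFF_NO_PI).

#include <errno.h>
#include <unistd.h>

#define TUN_BUF_SIZE 1504   /* assumed: MTU 1500 + optional 4-byte packet-information header */

static void on_tun_readable(int tunfd)
{
    unsigned char buf[TUN_BUF_SIZE];
    ssize_t n = read(tunfd, buf, sizeof(buf));   /* one read() returns one whole packet */
    if (n < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return;          /* nothing queued right now; wait for the next libevent callback */
        /* handle a real error here */
        return;
    }
    /* buf[0..n-1] now holds a complete IP packet; parse the IP header from this buffer */
}

The key difference from the original code is that the whole packet is taken in a single read() into an MTU-sized buffer; the 20-byte IP header is then parsed from that buffer instead of being read separately.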

Limitation of the reception buffer

I established a connection with a client this way:
gen_tcp:listen(1234,[binary,{packet,0},{reuseaddr,true},{active,false},{recbuf,2048}]).
This code performs message processing:
loop(Socket) ->
    inet:setopts(Socket, [{active, once}]),
    receive
        {tcp, Socket, Data} ->
            handle(Data),
            loop(Socket);
        {Pid, Cmd} ->
            gen_tcp:send(Socket, Cmd),
            loop(Socket);
        {tcp_closed, Socket} ->
            % ...
    end.
My OS is Windows. When the size of the message is 1024 bytes, I lose bytes in Data. The server sends ACK + FIN to the client.
I believe that Erlang is limited to 1024 bytes, which is why I set recbuf.
Where is the problem: Erlang, Windows, or the hardware?
Thanks.
You may be setting the receive buffer far too small. Erlang certainly isn't limited to a 1024 byte buffer. You can check for yourself by doing the following in the shell:
{ok, S} = gen_tcp:connect("www.google.com", 80, [{active,false}]),
O = inet:getopts(S, [recbuf]),
gen_tcp:close(S),
O.
On Mac OS X I get a default receive buffer size of about 512Kb.
With {packet, 0} parsing, you'll receive tcp data in whatever chunks the network stack chooses to send it in, so you have to do message boundary parsing and buffering yourself. Do you have a reliable way to check message boundaries in the wire protocol? If so, receive the tcp data and append it to a buffer variable until you have a complete message. Then call handle on the complete message and remove the complete message from the buffer before continuing.
We could probably help you more if you gave us some information on the client and the protocol in use.