libjrtp losing packets when streaming H264

I have several Axis IP-Cameras and I want to stream their H264 output over RTP to my application.
So far everything works most of the time with a single camera. But as soon as I attach more than one camera, I get lots of missing packets on every jrtplib instance I am using, resulting in bad video (artifacts, broken images, etc.).
So, I created a small test setup connecting just one camera and using just one jrtplib instance, with code more or less directly taken from the samples.
using namespace jrtplib;

RTPUDPv4TransmissionParams transparams;
RTPSessionParams sessparams;
RTPSession sess;

sessparams.SetOwnTimestampUnit(1.0 / 90000.0);
sessparams.SetAcceptOwnPackets(true);
transparams.SetPortbase(rtp_port);

auto status = sess.Create(sessparams, &transparams);
checkerror(status);

uint16_t last_sn = 0;
while (1)
{
    sess.BeginDataAccess();
    // check incoming packets
    if (sess.GotoFirstSourceWithData())
    {
        do
        {
            RTPPacket *pack;
            while ((pack = sess.GetNextPacket()) != NULL)
            {
                // you can examine the data here
                auto sn = pack->GetSequenceNumber();
                if (0 != last_sn && sn - last_sn != 1)
                {
                    std::cout << "\tmissing packets: " << (sn - last_sn) << std::endl;
                }
                std::cout << sn << std::endl;
                last_sn = sn;
                // we no longer need the packet, so delete it
                sess.DeletePacket(pack);
            }
        } while (sess.GotoNextSourceWithData());
    }
    sess.EndDataAccess();

    status = sess.Poll();
    checkerror(status);
    Sleep(1);
}
sess.BYEDestroy(RTPTime(10, 0), 0, 0);
Even with this simple test I get missing packets (missing sequence numbers). I also checked whether the missing sequence numbers are merely delayed, but they never arrive.
But when I set transparams.SetRTPReceiveBuffer() to a rather large value, such as 1048576 bytes, the missing packets stop, at least for this sample.
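For reference, this is the change that makes the sample work (a minimal sketch; the 1 MiB value was simply found by experiment):

// ask the OS for a larger UDP socket receive buffer before Create(),
// so bursts of RTP packets survive until the next Poll() drains the socket
transparams.SetRTPReceiveBuffer(1024 * 1024); // 1 MiB
auto status = sess.Create(sessparams, &transparams);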
In my real-world code, increasing the receive buffer does not help. I also tried moving the session.Poll() call to a separate thread.
Capturing the UDP packets with Wireshark shows no dropped packets, so it's something within libjrtp?
Does anyone have experience with this, or maybe even a suggestion for another library to use? I am quite stuck at this point...
Thanks for any hints; maybe it is just a small issue and I just don't see it.
Regards

Related

NetworkingDriverKit - How can I access packet data?

I've been creating a virtual ethernet interface. I've opened asynchronous communication with a controlling application and every time there are new packets, the controlling app is notified and then asks for the packet data. The packet data is stored in a simple struct, with uint8_t[1600] for the bytes, and uint32_t for the length. The dext is able to populate this struct with dummy data every time a packet is available, with the dummy data visible on the controlling application. However, I'm struggling to fill it with the real packet data.
The IOUserNetworkPacket provides metadata about a packet. It contains a packet's timestamp, size, etc., but it doesn't seem to contain the packet's data. There are the GetDataOffset() and GetMemorySegmentOffset() methods, which seem to return byte offsets for where the packet data is located in their memory buffer. My instinct tells me to add this offset to the pointer of wherever the packet data is stored. The problem is that I have no idea where the packets are actually stored.
I know they are managed by the IOUserNetworkPacketBufferPool, but I don't think that's where their memory is. There is the CopyMemoryDescriptor() method which gives an IOMemoryDescriptor of its contents. I tried using the descriptor to create an IOMemoryMap, using it to call GetAddress(). The pointers to all the mentioned objects lead to junk data.
I must be approaching this entirely wrong. If anyone knows how to access the packet data, or has any ideas, I would appreciate any help. Thanks.
Code snippet within IOUserClient::ExternalMethod:
case GetPacket:
{
    IOUserNetworkPacket *packet = ivars->m_provider->getPacket();

    GetPacket_Output output;
    output.packet_size = packet->getDataLength();

    IOUserNetworkPacketBufferPool *pool;
    packet->GetPacketBufferPool(&pool);

    IOMemoryDescriptor *memory = nullptr;
    pool->CopyMemoryDescriptor(&memory);

    IOMemoryMap *map = nullptr;
    memory->CreateMapping(0, 0, 0, 0, 0, &map);

    uint64_t address = map->GetAddress()
        + packet->getMemorySegmentOffset();
    memcpy(output.packet_data, (void *)address, packet->getDataLength());

    in_arguments->structureOutput = OSData::withBytes(
        &output, sizeof(GetPacket_Output));
    // free stuff
} break;
The problem was caused by an IOUserNetworkPacketBufferPool bug. My bufferSize was set to 1600, but this value was ignored and replaced with 2048. The IOUserNetworkPackets acted as though the bufferSize were still 1600, and so they returned an invalid offset.
Creating the buffer pool and mapping it:
kern_return_t
IMPL(FooDriver, Start)
{
    // ...
    IOUserNetworkPacketBufferPool::Create(this, "FooBuffer",
        32, 32, 2048, &ivars->packet_buffer);
    ivars->packet_buffer->CopyMemoryDescriptor(&ivars->packet_buffer_md);
    ivars->packet_buffer_md->Map(0, 0, 0, IOVMPageSize,
        &ivars->packet_buffer_addr, &ivars->packet_buffer_length);
    // ...
}
Getting the packet data:
void FooDriver::getPacketData(
    IOUserNetworkPacket *packet,
    uint8_t *packet_data,
    uint32_t *packet_size)
{
    uint8_t packet_head;
    uint64_t packet_offset;
    packet->GetHeadroom(&packet_head);
    packet->GetMemorySegmentOffset(&packet_offset);

    uint8_t *buffer = (uint8_t *)(ivars->packet_buffer_addr
        + packet_offset + packet_head);
    *packet_size = packet->getDataLength();
    memcpy(packet_data, buffer, *packet_size);
}

STM32 Keil - Can not access target while debugging (AT Command UART)

I am trying to communicate with a GSM module via UART. I can get messages from the module as expected. However, when execution reaches the while loop (it is empty), the debug session ends with a "can not access target" error. Step by step, I am going to share my code:
Function 1 is AT_Send. (Note: some of the variables are declared globally.)
int AT_Send(UART_HandleTypeDef *huart, ATHandleTypedef *hat,
            unsigned char *sendBuffer, uint8_t ssize,
            unsigned char *responseBuffer, uint8_t rsize)
{
    if (HAL_UART_Transmit_IT(huart, sendBuffer, ssize) != HAL_OK) {
        return -1;
    }
    while ((HAL_UART_GetState(huart) & HAL_UART_STATE_BUSY_TX) == HAL_UART_STATE_BUSY_TX) {
        continue;
    }
    // HAL_Delay(1000);
    if (strstr((char *)receiveBuffer, (char *)responseBuffer) != NULL) {
        rxIndex = 0;
        memset(command, 0, sizeof(command));
        return 0;
    }
    rxIndex = 0;
    memset(command, 0, sizeof(command));
    return 1;
}
The second function is AT_Init. It sends "AT" to get an "OK" response. From this point on, if I am not wrong, the receive interrupt is enabled and I am receiving one byte at a time.
int AT_Init(UART_HandleTypeDef *huart, ATHandleTypedef *hat)
{
    HAL_UART_Receive_IT(huart, &rData, 1);
    tx = AT_Send(huart, hat, "AT\r", sizeof("AT\r\n"), "OK\r\n", sizeof("OK\r\n"));
    return tx;
}
After these two functions, the receive interrupt is re-armed in the callback while there is data on the bus.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1) {
        command[rxIndex] = rData;
        rxIndex++;
        if ((rxIndex == 2) && (strstr((char *)command, "\r\n") != NULL)) {
            rxIndex = 0;
        } else if (strstr((char *)command, "\r\n") != NULL) {
            memcpy(receiveBuffer, command, sizeof(command));
            rxIndex = 0;
            memset(command, 0, sizeof(command));
        }
        HAL_UART_Receive_IT(&huart1, &rData, 1);
    }
}
Moreover, I am going to send a few HTTP commands as well, once I can get rid of this problem.
Can anyone share his/her knowledge?
Edit: Main function is shown below
tx = AT_Init(&huart1, &hat);
while (1)
{
    HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_3);
    HAL_Delay(500);
}
Edit 2: I replaced the UART channel with USART2, and the debugger worked. I suppose it is related to hardware. Still, I am curious about the possible reasons for this problem.
The question doesn't mention which µC the program is running on; I only see the "stm32" tag. Similarly, we don't know which debug protocol is used (JTAG or SWD?).
Still, I dare to guess that the toggle command for GPIO port PB3 in the main loop is causing the observations: on many (most? all?) STM32 controllers, PB3 is used as the JTDO pin, which is needed for JTAG debug connections.
Please make sure the debug connection is configured as SWD, without SWO (i.e., SWV is not correct either, since the SWO trace output also uses PB3). It may also help to check the wiring of the debug cable: the fast toggling of the PB3/JTDO line may influence the signal levels on neighbouring SWD lines if the wiring is of low quality or a fast SWD speed has been chosen.
My hypothesis can be falsified by removing all actions on PB3. If the problem remains, I'm wrong.
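A minimal way to run that test, assuming a spare pin such as PA5 is available (the pin choice is hypothetical; any pin not used by the debug port works): keep the blink loop, but move it off PB3.

tx = AT_Init(&huart1, &hat);
while (1)
{
    /* same blink as before, but on PA5 instead of PB3/JTDO,
       so the JTAG/SWD pins stay untouched */
    HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);
    HAL_Delay(500);
}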

How to work out a 'read/write' function using libmodbus? (C code)

I want to read/write under the Modbus TCP specification.
So I'm trying to code the client and server in a Linux environment.
(In the end I would communicate with a Windows program (as a client) using Modbus TCP.)
But it doesn't work as I want, so I'm asking here.
I'm testing the Linux client code against easymodbus as a server.
I used the libmodbus sample code.
I'd like to read coils (0x01) and write a coil (0x05).
When the code is executed using libmodbus, 'FF' is printed in the Unit ID part. (According to the manual, 01 should be output for Modbus TCP.)
I don't know why 'FF' is printed (photo attached).
Wrong result:
Expected result:
'[00] [00] .... [00]' ; do you know where to control this part?
Do you have, or do you know of, sample code that implements the 'read/write' functions using libmodbus?
Please let me know if you have any information.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <modbus.h>

/* inside main(); declarations added so the snippet is self-contained */
modbus_t *ctx;
uint8_t *tab_rq_bits;
uint8_t *tab_rp_bits;
int rc, nb = 1, addr = 0, nb_loop, nb_fail;

ctx = modbus_new_tcp("192.168.0.99", 502);
modbus_set_debug(ctx, TRUE);
if (modbus_connect(ctx) == -1) {
    fprintf(stderr, "Connection failed: %s\n",
            modbus_strerror(errno));
    modbus_free(ctx);
    return -1;
}

tab_rq_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rq_bits, 0, nb * sizeof(uint8_t));
tab_rp_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rp_bits, 0, nb * sizeof(uint8_t));
nb_loop = nb_fail = 0;

/* WRITE BIT */
rc = modbus_write_bit(ctx, addr, tab_rq_bits[0]);
if (rc != 1) {
    printf("ERROR modbus_write_bit (%d)\n", rc);
    printf("Address = %d, value = %d\n", addr, tab_rq_bits[0]);
    nb_fail++;
} else {
    rc = modbus_read_bits(ctx, addr, 1, tab_rp_bits);
    if (rc != 1 || tab_rq_bits[0] != tab_rp_bits[0]) {
        printf("ERROR modbus_read_bits single (%d)\n", rc);
        printf("address = %d\n", addr);
        nb_fail++;
    }
}

printf("Test: ");
if (nb_fail)
    printf("%d FAILS\n", nb_fail);
else
    printf("SUCCESS\n");

free(tab_rq_bits);
free(tab_rp_bits);

/* Close the connection */
modbus_close(ctx);
modbus_free(ctx);
return 0;
That FF you see right before the Modbus function code is actually correct. Quoting the Modbus Implementation Guide, page 23:
On TCP/IP, the MODBUS server is addressed using its IP address; therefore, the
MODBUS Unit Identifier is useless. The value 0xFF has to be used.
So libmodbus is just sticking to the Modbus specification. I'm assuming, then, that the problem is in easymodbus, which is apparently expecting you to use 0x01 as the unit id in your queries.
I imagine you don't want to mess with easymodbus, so you can fix this problem pretty easily from libmodbus: just change the default unit id:
modbus_set_slave(ctx, 1);
You could also go with:
rc = modbus_set_slave(ctx, MODBUS_BROADCAST_ADDRESS);
ASSERT_TRUE(rc != -1, "Invalid broadcast address");
to make your client address all slaves within the network, if you have more than one.
You have more info and a short explanation of where this problem comes from in the libmodbus man page for the modbus_set_slave function.
For a very comprehensive example, you can check the libmodbus unit tests.
And regarding your question number 5, I'm not sure how else to answer it: the zeros you mean are supposed to be the states (true or false) you want to write to (or read from) the coils. For writing, you can change them with the value argument of modbus_write_bit(ctx, address, value).
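A short sketch of that, assuming ctx is already connected as in your code (the addresses and values here are made up for illustration):

/* set coils 0..3 to 1,0,1,1, then read them back */
uint8_t req[4] = {1, 0, 1, 1};
uint8_t rsp[4] = {0};

rc = modbus_write_bits(ctx, 0, 4, req);    /* function code 0x0F, write multiple coils */
if (rc == 4) {
    rc = modbus_read_bits(ctx, 0, 4, rsp); /* function code 0x01, read coils */
}

modbus_write_bit() is the single-coil (0x05) variant of the same idea.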
I'm very grateful for your reply.
I tested the read/write functions using the 'unit-test-server/client' code you recommended.
I've reviewed the code, but there are still many things I don't know.
However, when testing the unit-test-server and unit-test-client against each other, there are address values that work and address values that do not (do you know why?).
- I checked and found that the UT_BITS_ADDRESS address value works from 0x130 to 0x150.
- An 'Illegal data address' error occurs at values below 0x130 and above 0x150.
- The addresses I want to read/write are 0x0001 to 0x0004 (do you know how to do this?).
I want to know how to process and transmit data like the TX part of the right picture.
I'm running both the client and the server in my Linux environment and doing read/write testing.
Among the wrong pictures, ...[06][FF]... <-- I want to know how to modify the FF part (to change the value to 01, as shown in the picture).
Also, is "modbus_set_slave" the function for Modbus RTU?
I'd like to communicate between a PC program and a Linux device in the end.
So in which part do I use that function?
Thanks again for your concern.

Extract frames from pcap files (tcpdump output) without using libraries

I need to parse pcap files and count the packets separately (TCP, UDP, IP). I found a lot of libraries for this, like libpcap and jnetpcap, but I want to do this without using any external libraries. I do not need code, just a conceptual explanation.
Question
While parsing pcap files, how should I distinguish between the frames (be it TCP, UDP, or IP)? I tried reading about the format, but what I do not understand is how I would know how many bytes to read for a particular frame, and how I would know what type of frame it is. Only once I am able to extract the packets separately will I be able to filter out the other information.
You'd have to parse each frame separately and keep a counter for each value you are trying to count. Assuming the capture you are examining is in pcap/pcapng format, the libpcap documentation of those file formats may still be helpful, even if you don't link against the library.
To give a quick sketch of what you might have to do (assuming the lowest layer is Ethernet without VLAN tags):
#include <stdint.h>
#include <arpa/inet.h>

uint64_t ip_count, tcp_count, udp_count;

void parse_pkt(uint8_t *data, uint32_t data_len) {
    // the EtherType is the 2-byte field at offset 12 of the Ethernet header
    uint16_t ether_type = ntohs(*(uint16_t *) (data + 12));
    if (ether_type != 0x0800) { // not IPv4
        return;
    }
    ip_count += 1;

    // the IPv4 header starts after the 14-byte Ethernet header;
    // the protocol field is a single byte at offset 9 of the IP header
    uint8_t *ip_hdr = data + 14;
    uint8_t protocol = ip_hdr[9];
    if (protocol == 0x11) {        // UDP
        udp_count++;
    } else if (protocol == 0x06) { // TCP
        tcp_count++;
    }
}
// for each packet record in the capture, call parse_pkt with its data and length
This code is fragile. Jumping to direct offsets without the proper length and type checks is not a good idea.
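As for knowing how many bytes to read per frame: in the classic pcap format, the file starts with a 24-byte global header, and each packet is preceded by a 16-byte record header whose incl_len field says exactly how many captured bytes follow; the global header's network field gives the link type (1 = Ethernet), i.e. what kind of frame those bytes are. A minimal sketch of walking such a file by hand, assuming classic pcap rather than pcapng, a native-byte-order capture (magic 0xa1b2c3d4), and no error handling:

#include <stdio.h>
#include <stdint.h>

void parse_pkt(uint8_t *data, uint32_t data_len); /* the function sketched above */

struct pcap_hdr {                /* 24-byte global file header */
    uint32_t magic;              /* 0xa1b2c3d4 when the byte order matches ours */
    uint16_t version_major, version_minor;
    int32_t  thiszone;
    uint32_t sigfigs;
    uint32_t snaplen;            /* maximum bytes stored per packet */
    uint32_t network;            /* link type; 1 = Ethernet */
};

struct pcaprec_hdr {             /* 16-byte per-packet record header */
    uint32_t ts_sec, ts_usec;    /* timestamp */
    uint32_t incl_len;           /* bytes actually stored for this frame */
    uint32_t orig_len;           /* bytes the frame had on the wire */
};

int main(void) {
    FILE *f = fopen("capture.pcap", "rb"); /* file name is just an example */
    struct pcap_hdr gh;
    struct pcaprec_hdr rh;
    uint8_t data[65536];

    fread(&gh, sizeof gh, 1, f);           /* read/inspect the global header */
    while (fread(&rh, sizeof rh, 1, f) == 1) {
        fread(data, 1, rh.incl_len, f);    /* read exactly incl_len bytes */
        parse_pkt(data, rh.incl_len);      /* classify as IP/TCP/UDP as above */
    }
    fclose(f);
    return 0;
}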

NodeJS: What is the proper way to handle TCP socket streams? Which delimiter should I use?

From what I understood here, "V8 has a generational garbage collector. Moves objects around randomly. Node can't get a pointer to raw string data to write to socket." So I shouldn't store data that comes from a TCP stream in a string, especially if that string becomes bigger than Math.pow(2,16) bytes. (Hope I'm right till now..)
What, then, is the best way to handle all the data coming from a TCP socket? So far I've been trying to use _:_:_ as a delimiter because I think it's somehow unique and won't clash with the rest of the data.
A sample of the data that would come would be something_:_:_maybe a large text_:_:_ maybe tons of lines_:_:_more and more data
This is what I tried to do:
var net = require('net');

var server = net.createServer(function (socket) {
    socket.on('connect', function () {
        console.log('someone connected');
        var buf = new Buffer(Math.pow(2, 16)); // new buffer with size 2^16
        socket.on('data', function (data) {
            if (data.toString().search('_:_:_') === -1) {
                // no separator in the data that just arrived: write it to the
                // buffer, it's part of a message that will be completed later
                buf.write(data.toString());
            } else {
                // there is a separator: the first part ends a previous message,
                // the last part starts a message to be completed in the future,
                // and any parts between separators are independent messages
                var parts = data.toString().split('_:_:_');
                if (parts.length == 2) {
                    var msg = buf.toString('utf-8', 0, 4) + parts[0];
                    console.log('MSG: ' + msg);
                    buf = new Buffer(Math.pow(2, 16)); // write() returns a length,
                    buf.write(parts[1]);               // so don't assign it to buf
                } else {
                    var msg = buf.toString() + parts[0];
                    for (var i = 1; i <= parts.length - 1; i++) {
                        if (i !== parts.length - 1) {
                            msg = parts[i];
                            console.log('MSG: ' + msg);
                        } else {
                            buf.write(parts[i]);
                        }
                    }
                }
            }
        });
    });
});
server.listen(9999);
Whenever I console.log('MSG: ' + msg), it prints the whole buffer, so it's useless for checking whether something worked.
How can I handle this data the proper way? Would the lazy module work, even if this data is not line-oriented? Is there some other module for handling streams that are not line-oriented?
It has indeed been said that there's extra work going on because Node has to take that buffer and then push it into V8/cast it to a string. However, doing a toString() on the buffer isn't any better. There's no good solution to this right now, as far as I know, especially if your end goal is to get a string and fool around with it. It's one of the things Ryan mentioned at NodeConf as an area where work needs to be done.
As for the delimiter, you can choose whatever you want. A lot of binary protocols instead include a fixed header, so that you can put things in a normal structure, which often includes a length. In this way, you slice apart a known header and get information about the rest of the data without having to iterate over the entire buffer. With a scheme like that, one can use a tool like:
node-binary - https://github.com/substack/node-binary
node-ctype - https://github.com/rmustacc/node-ctype
As an aside, buffers can be accessed via array syntax, and they can also be sliced apart with .slice().
Lastly, check here: https://github.com/joyent/node/wiki/modules -- find a module that parses a simple tcp protocol and seems to do it well, and read some code.
You should use the new streams2 API: http://nodejs.org/api/stream.html
Here are some very useful examples: https://github.com/substack/stream-handbook
https://github.com/lvgithub/stick