NetworkingDriverKit - How can I access packet data?

I've been creating a virtual Ethernet interface. I've opened asynchronous communication with a controlling application: every time new packets arrive, the controlling app is notified and then asks for the packet data. The packet data is stored in a simple struct, with a uint8_t[1600] for the bytes and a uint32_t for the length. The dext is able to populate this struct with dummy data every time a packet is available, and the dummy data is visible in the controlling application. However, I'm struggling to fill it with the real packet data.
The IOUserNetworkPacket provides metadata about a packet. It contains a packet's timestamp, size, etc., but it doesn't seem to contain the packet's data. There are the GetDataOffset() and GetMemorySegmentOffset() methods, which seem to return byte offsets for where the packet data is located in some memory buffer. My instinct tells me to add this offset to a pointer to wherever the packet data is stored. The problem is I have no idea where the packets are actually stored.
I know they are managed by the IOUserNetworkPacketBufferPool, but I don't think that's where their memory is. There is the CopyMemoryDescriptor() method, which gives an IOMemoryDescriptor of the pool's contents. I tried using the descriptor to create an IOMemoryMap and then calling GetAddress() on the map. The pointers to all the mentioned objects lead to junk data.
I must be approaching this entirely wrong. If anyone knows how to access the packet data, or has any ideas, I would appreciate any help. Thanks.
Code snippet within IOUserClient::ExternalMethod:
case GetPacket:
{
    IOUserNetworkPacket *packet =
        ivars->m_provider->getPacket();

    GetPacket_Output output;
    output.packet_size = packet->getDataLength();

    IOUserNetworkPacketBufferPool *pool;
    packet->GetPacketBufferPool(&pool);

    IOMemoryDescriptor *memory = nullptr;
    pool->CopyMemoryDescriptor(&memory);

    IOMemoryMap *map = nullptr;
    memory->CreateMapping(0, 0, 0, 0, 0, &map);

    uint64_t address = map->GetAddress()
        + packet->getMemorySegmentOffset();

    memcpy(output.packet_data,
        (void*)address, packet->getDataLength());

    in_arguments->structureOutput = OSData::withBytes(
        &output, sizeof(GetPacket_Output));

    // free stuff
} break;

The problem was caused by an IOUserNetworkPacketBufferPool bug. My bufferSize was set to 1600, but this value was ignored and replaced with 2048. The IOUserNetworkPackets acted as though the bufferSize was 1600, and so they gave an invalid offset.
Creating the buffer pool and mapping it:
kern_return_t
IMPL(FooDriver, Start)
{
    // ...
    // Create the pool with the bufferSize the framework actually uses (2048)
    IOUserNetworkPacketBufferPool::Create(this, "FooBuffer",
        32, 32, 2048, &ivars->packet_buffer);
    ivars->packet_buffer->CopyMemoryDescriptor(&ivars->packet_buffer_md);
    ivars->packet_buffer_md->Map(0, 0, 0, IOVMPageSize,
        &ivars->packet_buffer_addr, &ivars->packet_buffer_length);
    // ...
}
Getting the packet data:
void FooDriver::getPacketData(
    IOUserNetworkPacket *packet,
    uint8_t *packet_data,
    uint32_t *packet_size
) {
    uint8_t packet_head;
    uint64_t packet_offset;
    packet->GetHeadroom(&packet_head);
    packet->GetMemorySegmentOffset(&packet_offset);
    packet->GetDataLength(packet_size);

    // The payload lives in the mapped pool buffer, at the packet's
    // memory segment offset plus its headroom.
    uint8_t *buffer = (uint8_t*)(ivars->packet_buffer_addr
        + packet_offset + packet_head);
    memcpy(packet_data, buffer, *packet_size);
}
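With that helper, the user client's GetPacket handler from the question reduces to a call into the driver; a sketch reusing the question's own names (GetPacket_Output, m_provider), and assuming m_provider is the FooDriver above:
case GetPacket:
{
    IOUserNetworkPacket *packet = ivars->m_provider->getPacket();
    GetPacket_Output output = {};
    ivars->m_provider->getPacketData(packet,
        output.packet_data, &output.packet_size);
    in_arguments->structureOutput = OSData::withBytes(
        &output, sizeof(GetPacket_Output));
} break;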

Related

How to pass user space data to dmaengine client usage call?

I have an arm64 board with an FPGA (an SoC).
The task is simple:
make it possible to transfer data from/to a user-space app to/from kernel-space physical memory (device memory = FPGA registers), both with and without DMA (streaming type). The DMA is on the board (ZynqMP / GDMA).
I will eventually have several devices, on the FPGA and outside it, which should use this communication path, but for now I'm working only with the FPGA DDR4 memory area.
Right now I see the following logical flow:
some initialization (DMA parameters and so on);
ioremap() the FPGA device area;
allocate a buffer (by kzalloc() or another allocator) - this buffer is handed to user space via the mmap fop;
build a scatterlist from the buffer (pseudo-code below);
use the scatterlist with the dmaengine to transfer data;
// scatterlist init pseudo-code
struct scatterlist sgl[2];
struct scatterlist *sge;
int i, buf_n, err_code;
__u8 *buffer; // allocated earlier

sg_init_table(sgl, ARRAY_SIZE(sgl));
for_each_sg(sgl, sge, ARRAY_SIZE(sgl), i) {
    struct page *pg = virt_to_page(buffer + i * PAGE_SIZE);
    dma_addr_t dma_handle = dma_map_page(&pdev->dev, pg, 0, PAGE_SIZE,
                                         direction /* DMA_TO_DEVICE */);
    if ((err_code = dma_mapping_error(&pdev->dev, dma_handle))) {
        dev_err(&pdev->dev, "dma page mapping failed! (code: %i)\n", err_code);
        break;
    }
    sg_set_page(sge, pg, PAGE_SIZE, 0);
}
dma_map_sg(&pdev->dev, sgl, ARRAY_SIZE(sgl), direction); // with appropriate check
Now here is what I don't understand: how or where is the destination controlled? I mean, I allocated the buffer in RAM, made a scatterlist from it, and pass this list as an argument to the dmaengine functions for the transfer. But I never set/use the ioremapped device memory area to receive the buffer's data! Does this DMA work only with an appropriate RAM memory area, so that I should copy the buffer to the device area myself? Or should I use the ioremapped area as my buffer?
Is this the right flow? Please point out my mistakes.
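For context on the "where is the destination controlled" part: with the dmaengine slave API, the device-side address is not carried in the scatterlist at all; it is set once per channel via dma_slave_config, and the scatterlist describes only the RAM side of the transfer. A minimal sketch under those assumptions (the channel name "gdma0" and fpga_phys_addr are hypothetical):
/* device-side address is configured on the channel, not in the scatterlist */
struct dma_slave_config cfg = {
    .direction = DMA_MEM_TO_DEV,
    .dst_addr = fpga_phys_addr, /* hypothetical: bus address of the FPGA region */
    .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
};
struct dma_chan *chan = dma_request_chan(&pdev->dev, "gdma0"); /* hypothetical name */
dmaengine_slave_config(chan, &cfg);

/* sgl/nents describe only the RAM buffer (error checks omitted) */
struct dma_async_tx_descriptor *desc =
    dmaengine_prep_slave_sg(chan, sgl, ARRAY_SIZE(sgl),
                            DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
dmaengine_submit(desc);
dma_async_issue_pending(chan);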

Detecting CAN bus errors under socketCAN linux driver

Our products use a well-known CANopen stack, which uses SocketCAN, on an embedded BeagleBone Black based system running Ubuntu 14.04 LTS. But for some reason, even though the stack we're using will detect when the CAN bus goes into a PASSIVE state or even a BUS OFF state, it never indicates when the CAN bus recovers from errors, leaves the PASSIVE or warning state, and enters a non-error state.
If I were to query the socketCAN driver directly (via ioctl calls), would I be able to detect when the CAN bus goes in and out of a warning state (which is less than 127 errors), in and out of a PASSIVE state (greater than 127 errors) or goes BUS OFF (greater than 255 errors)?
I'd like to know whether I'd be wasting my time doing this, or whether there is a better way to detect, accurately and in real time, all conditions of a CAN bus.
I have only a partial solution to that problem.
As you are using socketCAN, the interface is seen as a standard network interface, on which we can query the status.
Based on How to check Ethernet in Linux? (replace "eth0" by "can0"), you can check the link status.
This is not real-time, but can be executed in a periodic thread to check the bus state.
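A minimal sketch of that periodic check, reading the interface state from the standard netdev sysfs node (the interface name can0 is an assumption):
#include <stdio.h>
#include <string.h>

/* Returns 1 if can0 reports "up", 0 otherwise. */
static int can0_is_up(void)
{
    char state[32] = {0};
    FILE *f = fopen("/sys/class/net/can0/operstate", "r");
    if (!f)
        return 0;
    if (fgets(state, sizeof(state), f) == NULL)
        state[0] = '\0';
    fclose(f);
    return strncmp(state, "up", 2) == 0;
}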
So while this is an old question, I just happened to stumble upon it (while searching for something only mildly related).
SocketCAN provides all the means for detecting error frames OOB.
Assuming your code looks similar to this:
#include <linux/can.h>
#include <linux/can/error.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int readFromCan(int socketFd, unsigned char* data, uint32_t* rxId) {
    int32_t bytesRead = -1;
    struct can_frame canFrame = {0};
    bytesRead = (int32_t)read(socketFd, &canFrame, sizeof(canFrame));
    if (bytesRead >= 0) {
        bytesRead = canFrame.can_dlc;
        if (data) {
            memcpy(data, canFrame.data, canFrame.can_dlc);
        }
        if (rxId) {
            *rxId = canFrame.can_id; // This will come in handy
        }
    }
    return bytesRead;
}

void doStuffWithMessage() {
    int32_t mySocketFd = fooGetSocketFd();
    uint32_t receiveId = 0;
    unsigned char myData[8] = {0};
    int32_t dataLength = 0;
    if ((dataLength = readFromCan(mySocketFd, myData, &receiveId)) == -1) {
        // Handle read error
        return;
    }
    if ((receiveId & CAN_ERR_FLAG) != 0) { // error frames have CAN_ERR_FLAG set in the id
        // Handle error frame
        return;
    }
    // Do stuff with your data
}
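One caveat: a raw CAN socket does not deliver error frames unless you subscribe to them, so the check above only fires if the socket was set up with an error mask, e.g.:
#include <linux/can.h>
#include <linux/can/raw.h>
#include <linux/can/error.h>
#include <sys/socket.h>

/* subscribe to all error classes on the raw socket (socketFd as above) */
can_err_mask_t err_mask = CAN_ERR_MASK;
setsockopt(socketFd, SOL_CAN_RAW, CAN_RAW_ERR_FILTER,
           &err_mask, sizeof(err_mask));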

libjrtp losing packets when streaming h264

I have several Axis IP cameras and I want to stream their H264 output over RTP to my application.
So far everything works most of the time, usually with one camera. As soon as I attach more than one camera, I get lots of missing packets in every jrtplib instance I am using, resulting in bad video (artifacts, broken images, etc.).
So I created a small test setup connecting just one camera and using just one jrtplib instance, with code more or less taken directly from the samples.
using namespace jrtplib;

RTPUDPv4TransmissionParams transparams;
RTPSessionParams sessparams;
RTPSession sess;

sessparams.SetOwnTimestampUnit(1.0 / 90000.0);
sessparams.SetAcceptOwnPackets(true);
transparams.SetPortbase(rtp_port);

auto status = sess.Create(sessparams, &transparams);
checkerror(status);

uint16_t last_sn = 0;
while (1)
{
    sess.BeginDataAccess();
    // check incoming packets
    if (sess.GotoFirstSourceWithData())
    {
        do
        {
            RTPPacket *pack;
            while ((pack = sess.GetNextPacket()) != NULL)
            {
                // You can examine the data here
                auto sn = pack->GetSequenceNumber();
                if (0 != last_sn && sn - last_sn != 1)
                {
                    std::cout << "\tmissing packets: " << (sn - last_sn) << std::endl;
                }
                std::cout << sn << std::endl;
                last_sn = sn;
                // we no longer need the packet, so we'll delete it
                sess.DeletePacket(pack);
            }
        } while (sess.GotoNextSourceWithData());
    }
    sess.EndDataAccess();

    status = sess.Poll();
    checkerror(status);
    Sleep(1);
}
sess.BYEDestroy(RTPTime(10, 0), 0, 0);
Even with this simple test I get missing packets (missing sequence numbers); I also checked whether the missing sequence numbers were just delayed, but no.
But when I set transparams.SetRTPReceiveBuffer to a rather high value, like 1048576 bytes, the missing packets stop, at least for this sample.
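For reference, that call (the RTPUDPv4TransmissionParams method named above) goes on the transmission parameters before Create():
transparams.SetPortbase(rtp_port);
transparams.SetRTPReceiveBuffer(1048576); // enlarge the OS receive buffer for RTP
auto status = sess.Create(sessparams, &transparams);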
In my real-world code, increasing the receive buffer does not help. I also tried moving the session.Poll() to a separate thread.
Capturing the UDP packets with Wireshark shows no dropped packets, so is it something in libjrtp?
Does anyone have experience with this, or maybe even a suggestion for another library to use? I am quite stuck at this point...
Thanks for any hints; maybe it is just a small issue and I just don't see it.
Regards

Extract frames from pcap files (tcpdump output) without using Libraries

I need to parse pcap files and count the packets separately (TCP, UDP, IP). I found a lot of libraries for this, like pcap and jnetpcap, but I want to do this without using any external libraries. I do not need code, just a conceptual explanation.
Question
While parsing pcap files, how should I distinguish between the frames (be it TCP, UDP, or IP)? I tried reading about the format, but I do not understand how I would know how many bytes to read for a particular frame, and how I would know what type of frame it is. Only once I am able to extract the packets separately will I be able to filter out other information.
You'd have to parse each frame separately and keep a counter for each value you are trying to count. Assuming the capture you are examining is in pcap/pcapng format, libpcap would normally be helpful here.
To give a quick overview of what you might have to do (assuming the link layer is Ethernet without VLAN tags):
#include <stdint.h>
#include <arpa/inet.h>

uint64_t ip_count, tcp_count, udp_count;

void parse_pkt(uint8_t *data, uint32_t data_len) {
    // EtherType is at bytes 12-13 of the Ethernet header
    uint16_t ether_type = ntohs(*(uint16_t *) (data + 12));
    if (ether_type != 0x0800) { // not IPv4
        return;
    }
    ip_count += 1;
    uint8_t *ip_hdr = data + 14; // Ethernet header is 14 bytes
    uint8_t protocol = ip_hdr[9]; // protocol is a single byte at offset 9
    // protocol is either udp/tcp/sctp...etc
    if (protocol == 0x11) { // UDP
        udp_count++;
    } else if (protocol == 0x06) { // TCP
        tcp_count++;
    }
}
// for each pkt from libpcap_open, call parse_pkt with the data and data_len
This code is fragile. Jumping to direct offsets without the proper length and type checks is not a good idea.
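As for "how many bytes should I read" without any library: a classic pcap file is a 24-byte global header followed by records, and each record starts with a 16-byte header whose incl_len field says exactly how many bytes of frame data follow. A minimal sketch (classic pcap only; byte-order/magic handling and pcapng are ignored):
#include <stdio.h>
#include <stdint.h>

struct pcap_rec_hdr {
    uint32_t ts_sec;   /* timestamp, seconds */
    uint32_t ts_usec;  /* timestamp, microseconds */
    uint32_t incl_len; /* number of frame bytes stored in this record */
    uint32_t orig_len; /* original frame length on the wire */
};

void read_captures(FILE *f) {
    uint8_t frame[65536];
    struct pcap_rec_hdr rec;
    fseek(f, 24, SEEK_SET); /* skip the 24-byte global header */
    while (fread(&rec, sizeof(rec), 1, f) == 1) {
        if (rec.incl_len > sizeof(frame))
            break; /* corrupt file or unsupported snaplen */
        if (fread(frame, 1, rec.incl_len, f) != rec.incl_len)
            break;
        parse_pkt(frame, rec.incl_len); /* the function above */
    }
}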

Memory Leak when using Pointer to AudioUnitSampleType in Struct - calloc

I am coding an audio app for the iPhone where I need to use some C code to deal with the audio files. In short, I have a memory leak that is causing the app to crash after enough files have been loaded. The problem is related to a struct that I create to hold the audio files when they are read in. The struct is created as follows:
typedef struct {
    UInt32 frameCount;   // the total number of frames in the audio data
    UInt32 sampleNumber; // the next audio sample to play
    BOOL isStereo;       // set to true if there is data in the audioDataRight member
    AudioUnitSampleType *audioDataLeft;  // complete left channel of audio data read from file
    AudioUnitSampleType *audioDataRight; // complete right channel of audio data read from file
} soundStruct, *soundStructPtr;
An array of these structs is then declared in the header like this:
soundStruct phraseSynthStructArray[3];
I then attempt to join two files that have been read into phraseSynthStructArray[phrase1Index] and phraseSynthStructArray[phrase2Index], putting the combined file into phraseSynthStructArray[synthPhraseIndex], like this:
- (BOOL) joinPhrases:(UInt32)phrase1Index phrase2Index:(UInt32)phrase2Index synthPhraseIndex:(UInt32)synthPhraseIndex {
    // get the combined frame count
    UInt64 totalFramesInFile = phraseSynthStructArray[phrase1Index].frameCount
                             + phraseSynthStructArray[phrase2Index].frameCount;

    // now resize the synthPhrase slot buffer to be the same size as both files combined
    free(phraseSynthStructArray[synthPhraseIndex].audioDataLeft);
    phraseSynthStructArray[synthPhraseIndex].audioDataLeft = NULL;
    phraseSynthStructArray[synthPhraseIndex].frameCount = 0;
    phraseSynthStructArray[synthPhraseIndex].frameCount = totalFramesInFile;
    phraseSynthStructArray[synthPhraseIndex].audioDataLeft =
        (AudioUnitSampleType *) calloc(totalFramesInFile, sizeof(AudioUnitSampleType));

    // copy phrase 1, then phrase 2, into the new buffer
    for (UInt32 frameNumber = 0; frameNumber < phraseSynthStructArray[phrase1Index].frameCount; ++frameNumber) {
        phraseSynthStructArray[synthPhraseIndex].audioDataLeft[frameNumber] =
            phraseSynthStructArray[phrase1Index].audioDataLeft[frameNumber];
    }
    UInt32 sampleNumber = 0;
    for (UInt32 frameNumber = phraseSynthStructArray[phrase1Index].frameCount; frameNumber < totalFramesInFile; ++frameNumber) {
        phraseSynthStructArray[synthPhraseIndex].audioDataLeft[frameNumber] =
            phraseSynthStructArray[phrase2Index].audioDataLeft[sampleNumber];
        sampleNumber++;
    }
    return YES;
}
This all works fine, and the resulting file is joined and can be used. The issue I am having is with the memory I allocate here: phraseSynthStructArray[synthPhraseIndex].audioDataLeft = (AudioUnitSampleType *) calloc(totalFramesInFile, sizeof(AudioUnitSampleType)); The next time the method is called, this memory leaks, and eventually the app crashes. The reason I need to allocate the memory here is that the buffer has to be resized to accommodate the joined file, which varies in length depending on the size of the input files.
I cannot free the memory after the operation, as it's needed elsewhere after the method has been called, and I have tried freeing it before (in the joinPhrases method above), but this does not seem to work. I have also tried using realloc to free/reallocate the memory by passing the pointer to the previously allocated memory, but this causes a crash stating EXC_BAD_ACCESS.
I am not a seasoned C programmer and cannot figure out what I am doing wrong here to cause the leak. I would appreciate some advice to help me track down this issue, as I have been banging my head against this for days with no joy. I have read that it's a bad idea to have pointers in structs; could this be the root of my problem?
Thanks in advance,
K.
Maybe this helps:
- (BOOL) joinPhrases:(UInt32)phrase1Index phrase2Index:(UInt32)phrase2Index synthPhraseIndex:(UInt32)synthPhraseIndex {
    // get the combined frame count
    UInt64 totalFramesInFile = phraseSynthStructArray[phrase1Index].frameCount
                             + phraseSynthStructArray[phrase2Index].frameCount;
    . . .
    void* old_ptr = phraseSynthStructArray[synthPhraseIndex].audioDataLeft;
    phraseSynthStructArray[synthPhraseIndex].audioDataLeft =
        (AudioUnitSampleType *) calloc(totalFramesInFile, sizeof(AudioUnitSampleType));
    if (old_ptr) free(old_ptr);
    . . .
    return YES;
}
And make sure that there is no garbage in phraseSynthStructArray[synthPhraseIndex] before the first call: calling free() (or realloc()) on an uninitialized pointer is undefined behavior, which would also explain the EXC_BAD_ACCESS you saw with realloc.
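A simple way to guarantee that is to zero the whole array once before any joins, so the very first free() sees NULL (which is safe to pass to free) rather than stack garbage:
// once, before the first joinPhrases call (e.g. in the class's setup code)
memset(phraseSynthStructArray, 0, sizeof(phraseSynthStructArray));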