For every received Ethernet frame, the driver allocates a new sk_buff, which is inefficient. Is it possible to allocate a pool of sk_buffs during initialization and reuse them after the higher layers have finished with them?
Related
I pretty much understand how the CAN protocol works: when two nodes attempt to use the bus at the same time, the frame with the lower ID wins arbitration, and the other node detects this and backs off.
This seems to be abstracted away when using SocketCAN -- we simply write() and read() as we would with any file descriptor. I may be misunderstanding something, but I've gone through most of the docs (http://lxr.free-electrons.com/source/Documentation/networking/can.txt) and I don't think it's described unambiguously.
Does write() block until our frame is the lowest-ID frame on the bus, or does SocketCAN buffer the frame until the network is ready? If so, is the user notified when transmission occurs, or do we use the loopback interface for that?
write() does not block on channel contention. It could block for the same reasons a TCP socket write would (very unlikely).
The CAN peripheral receives a frame to be transmitted from the kernel and performs the Medium Access Control (MAC) protocol to send it over the wire. SocketCAN knows nothing about this layer of the protocol.
Where the frame is buffered is peripheral/driver dependent: the kernel-driver-peripheral chain behaves as three chained FIFOs with their own flow-control mechanisms, but usually it is the driver that does most of the buffering (when buffering is needed), since the peripheral has less memory available.
It is possible to subscribe to errors in the CAN protocol stack (signaled by so-called "error frames") by passing certain flags through the SocketCAN interface (see section 4.1.2 in your link): this is the way to get error information at the application layer.
Of course you can check for a correctly transmitted frame via the loopback interface, but that is overkill; the error-reporting mechanism described above is easier to use and should be preferred.
Is there a way to map a descriptor created by socket() to a memory buffer?
The reason I am looking for this is that I want an existing application to read from a memory buffer I created instead of from its associated TCP buffer. I shouldn't modify the application, so I want to map the fd it uses to a buffer I created.
I found a similar question:
Can descriptors for sockets be converted to File Pointers?
But I don't know whether fdopen() can serve my purpose, because fdopen() takes only two arguments (fd and mode), and I don't see how to re-associate the fd with memory I allocate with malloc().
Is there a way to map a descriptor created by socket() to a memory buffer?
No. It doesn't make sense. A mapped file makes sense because of the virtual memory system. A mapped socket doesn't.
I want to map a fd returned by the application to a buffer I created.
You will have to write code to read from the socket into your buffer.
I have posted here a function that I use to get the Accelerate-framework FFT:
Setup the accelerator framework for fft on the iPhone
It is working great.
The thing is that I use it in real time, so for each new audio buffer I call this function with the new buffer.
I get a memory warning, probably because of these lines:
A.realp = (float *) malloc(nOver2 * sizeof(float));
A.imagp = (float *) malloc(nOver2 * sizeof(float));
Questions:
Do I have any option other than malloc'ing them again and again (don't forget I have to feed it a new buffer many times a second)?
How exactly do I free them? (code lines)
Could it be caused by the FFT being heavy on the system?
Any way to get rid of this warning would help me a lot.
Thanks a lot.
These things should be done once, at the start of your program:
Allocate memory for buffers, using code like float *buffer = malloc(NumberOfElements * sizeof *buffer);.
Create an FFT setup, using code like FFTSetup setup = vDSP_create_fftsetup(log2n, FFT_RADIX2);.
Also test the return values. If malloc or vDSP_create_fftsetup returns 0, write an error message and exit the program, or take whatever other error handling is appropriate.
These things should be done once, at the end of your program:
Destroy the FFT setup, using code like vDSP_destroy_fftsetup(setup);.
Release the memory for the buffers, using code like free(buffer);.
In the middle of your program, while you are processing samples, the code should use the existing buffers and setup. So the variables pointing to the buffers and the setup must be visible to that code. You can either pass them in as parameters (perhaps grouped together in a struct) or make them global (which should be only a temporary solution for small programs).
Your program should be arranged so that it is never necessary to allocate memory or create an FFT setup while samples are being processed.
All memory that is allocated should be freed eventually.
If you are malloc'ing and never freeing, you will leak memory and eventually run out. Make sure to release your memory using free().
*Note: free() doesn't actually erase any memory. It simply tells the system that you're done with that memory and that it's available for other allocations.
// Example:
// allocating memory
int *intpointer = malloc(sizeof *intpointer);
if (intpointer == NULL) {
    // allocation failed; handle the error
}
// ... do stuff ...
// freeing it when you are done
free(intpointer);
intpointer = NULL;  // avoid reusing a dangling pointer
I read the following link:
Linux Device Driver Program, where the program starts?
According to it, all system calls operate independently of each other.
1> Then how do we share common memory between different system calls and an interrupt handler?
There should be some way to allocate memory so that they all have common access to one block of memory.
2> Also, through which pointer should the memory be allocated so that it is accessible by all?
Is there an example which uses driver private data?
For my app, I need to play music in the background while the user navigates inside it.
So, starting from MixerHost, I developed an audio mixer which is able to play 8 tracks simultaneously. Nevertheless, it consumes too much memory, because the 8 track files are loaded entirely into 8 buffers.
To limit the memory consumption, I load only a small chunk of data at the beginning, and I feed in new data in the callback, like this:
result = ExtAudioFileRead ( audioFileObject, &numberOfPacketsToRead, bufferList );
It works quite well, but sometimes the playback pauses briefly. I know the origin of the problem: doing filesystem access in the callback.
But is there another solution to limit memory consumption ?
The way this is typically handled is with a shared ring buffer. The ring buffer acts like a shock absorber between the real-time render thread and the slow disk accesses. Create a new thread that does nothing but read audio from the file and store it in the ring buffer. Then, in your render callback, just read from the ring buffer.
Apple has provided an implementation of a ring buffer suitable for use with Audio Units, called CARingBuffer. It's available in /Developer/Extras/CoreAudio/PublicUtility/CARingBuffer.