I'm reading Linux Device Drivers, 3rd Edition. In Chapter 6 (poll and select), the author says:
"*unsigned int (*poll) (struct file *filp, poll_table wait);
The driver method is called whenever the user-space program performs a poll, select,
or epoll system call involving a file descriptor associated with the driver."
So if I have hundreds of fds in my epoll call, will this poll method in the driver be called hundreds of times every time I call epoll()?
Thanks.
Yes, the kernel will loop through all the file descriptors and call the poll() method. It needs to sample the current state of all the file descriptors in order to report them back to the caller in userspace.
Note that this is true for select and poll. I'm not familiar with epoll, but if it uses the same file operation, then it should apply here as well.
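For reference, a driver's poll method typically looks something like this (a minimal sketch in the LDD3 style; struct my_dev and its readable/writable flags are hypothetical driver state):

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

struct my_dev {
    wait_queue_head_t inq;        /* woken by the driver when state changes */
    bool readable, writable;      /* hypothetical device state */
};

static unsigned int my_poll(struct file *filp, poll_table *wait)
{
    struct my_dev *dev = filp->private_data;
    unsigned int mask = 0;

    /* Register our wait queue so the caller can sleep until we wake it. */
    poll_wait(filp, &dev->inq, wait);

    /* Sample the current state and report it back to select/poll/epoll. */
    if (dev->readable)
        mask |= POLLIN | POLLRDNORM;
    if (dev->writable)
        mask |= POLLOUT | POLLWRNORM;

    return mask;
}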
I wrote a C++ program that creates a socket and binds it to receive ICMP/UDP packets. The code I wrote is as follows:
while (true) {
    recv_size = recvfrom(sockId, rePack, sizeof(rePack), 0,
                         (struct sockaddr *)&raddr, (socklen_t *)&len);
    processPakcet(recv_size);
}
So I used an endless while loop to receive messages continually, but I'm worried about the following two questions:
1. How long will a message be kept in the receive queue (or, say, the NIC queue)?
I'm worried that if it takes too long to process the first message, I might miss the second one. So how quickly should I read again after each read?
2. How do I prevent reading duplicated messages?
That is, does the receive queue keep track of me? When my thread finishes reading the first message, will the queue automatically give me the second one? Or, put another way, once I read the first message, is it removed from the queue so that no one can receive it again?
Additionally, I suspect the while(true) approach is not good; could anyone give me a better suggestion? (I've heard of something like a polling model.)
First, you should always check the return value from recvfrom. It's unlikely the recvfrom will fail, but if it does (for example, if you later implement signal handling, it might fail with EINTR) you will be processing undefined data. Also, of course, the return value tells you the size of the packet you received.
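For example, the loop body might check the result like this (a sketch using the variable names from your snippet; it needs <errno.h> and <stdio.h>):

ssize_t recv_size = recvfrom(sockId, rePack, sizeof(rePack), 0,
                             (struct sockaddr *)&raddr, (socklen_t *)&len);
if (recv_size < 0) {
    if (errno == EINTR)
        continue;               /* interrupted by a signal: just retry */
    perror("recvfrom");
    break;                      /* some other error: handle or bail out */
}
processPakcet(recv_size);       /* recv_size is the length of this packet */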
For question 1, the actual answer is operating system-dependent. However, most operating systems will buffer some number of packets for you. The OS interrupt handler that handles the incoming packet will never be copying it directly into your application level buffer, so it will always go into an OS buffer first. The OS has previously noted your interest in it (by virtue of creating the socket and binding it you expressed interest), so it will then place a pointer to the buffer onto a queue associated with your socket.
A different part of the OS code will then (after the interrupt handler has completed) copy the data from the OS buffer into your application memory, free the OS buffer, and return to your program from the recvfrom system call. If additional packets come in, either before or after you have started processing the first one, they'll be placed on the queue too.
That queue is not infinite of course. It's likely that you can configure how many packets (or how much buffer space) can be reserved, either at a system-wide level (think sysctl-type settings in linux), or at the individual socket level (setsockopt / ioctl).
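For example, on Linux you can ask for a larger per-socket receive buffer with setsockopt (the kernel caps this at the net.core.rmem_max sysctl; the value used here is just an illustration):

int rcvbuf = 1 << 20;   /* request roughly 1 MiB of receive buffering */
if (setsockopt(sockId, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
    perror("setsockopt(SO_RCVBUF)");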
If, when you call recvfrom, there are already queued packets on the socket, the system call handler will not block your process, instead it will simply copy from the OS buffer of the next queued packet into your buffer, release the OS buffer, and return immediately. As long as you can process incoming packets roughly as fast as they arrive or faster, you should not lose any. (However, note that if another system is generating packets at a very high rate, it's likely that the OS memory reserved will be exhausted at some point, after which the OS will simply discard packets that exceed its resource reservation.)
For question 2, you will receive no duplicate messages (unless something upstream of your machine is actually duplicating them). Once a queued message is copied into your buffer, it's released before returning to you. That message is gone forever.
(Note that it's possible that some other process has also created a socket expressing interest in the same packets. That process would also get a copy of the packet data, which is typically handled internal to the operating system by reference counting rather than by actually duplicating the OS buffers, although that detail is invisible to applications. In any case, once all interested processes have received the packet, it will be discarded.)
There's really nothing at all wrong with a while (true) loop; it's a very common control structure for long-running server-type programs. If your program has nothing else it needs to be doing in the meantime, a while (true) loop that simply blocks in recvfrom is the simplest and hence clearest way to implement it.
(You could use a select(2) or poll(2) call to wait. This allows you to handle waiting for any one of multiple file descriptors at the same time, or to periodically "time out" and go do something else, say, but again if you have nothing else you might need to be doing in the meantime, that is introducing needless complication.)
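For instance, a poll(2)-based wait with a one-second timeout might look like this (a sketch; sockId is your socket, the timeout value is arbitrary, and it needs <poll.h>):

struct pollfd pfd = { .fd = sockId, .events = POLLIN };
int ready = poll(&pfd, 1, 1000);           /* wait up to 1000 ms */
if (ready < 0) {
    perror("poll");
} else if (ready == 0) {
    /* timed out: do periodic housekeeping here */
} else if (pfd.revents & POLLIN) {
    /* a packet is queued: recvfrom() will not block now */
}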
I am new to STM32 and FreeRTOS. I need to write a program to send and receive data from a module via a UART port. I have to send (transmit) data to that module (e.g. an M66). Then I return to do some other tasks. Once the M66 sends a response, my serial-port receive function (HAL_UART_Receive_IT) has to be invoked to receive that response. How can I achieve this?
The way HAL_UART_Receive_IT works is that you configure it to receive a specified amount of data into a given buffer: you give it the buffer into which it will read the received data and the number of bytes you want to receive, and it then starts receiving. Once exactly that amount of data has been received, the callback function HAL_UART_RxCpltCallback gets called (from the IRQ), where you can do whatever you want with the data, e.g. add it to some kind of queue for later processing in the task context.
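A minimal sketch of that pattern (assuming a CubeMX-style project with a handle named huart1; the fixed length of 8 bytes is just an example):

#define RX_LEN 8
extern UART_HandleTypeDef huart1;
static uint8_t rx_buf[RX_LEN];

/* somewhere in your init code: arm the interrupt-driven receive */
HAL_UART_Receive_IT(&huart1, rx_buf, RX_LEN);

/* called from interrupt context once exactly RX_LEN bytes have arrived */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1) {
        /* hand rx_buf to your task here, then re-arm for the next chunk */
        HAL_UART_Receive_IT(&huart1, rx_buf, RX_LEN);
    }
}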
In my experience, though, HAL's UART module is not the greatest for generic use where you don't know in advance how much data you expect to receive. With the M66 modem you mention, that will be the case all the time.
To solve this you have two choices:
Simply don't use HAL functions at all for the UART, other than the initialization functions. Implement your own UART interrupt handler (most of the code can be copied from the handler in HAL) where, upon receiving data, you place the received bytes in a receive queue handled by your RTOS task. In that task you implement the protocol parsing. This is the approach I use personally.
If you really want to use HAL but also work with a module that sends a varying amount of data, call HAL_UART_Receive_IT and specify that you want to receive 1 byte each time. This will work, but will be (potentially much) slower than the first approach. Assuming you'll later want to implement some TCP/IP communication (you mentioned the M66 GPRS module), you probably don't want to do it this way; see the sketch below.
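A sketch of that one-byte-at-a-time variant, pushing each byte into a FreeRTOS queue for a task to parse (the handle and queue names are placeholders, and the queue is assumed to have been created elsewhere with xQueueCreate):

extern UART_HandleTypeDef huart1;
extern QueueHandle_t uart_rx_queue;    /* e.g. xQueueCreate(128, sizeof(uint8_t)) */
static uint8_t rx_byte;

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1) {
        BaseType_t woken = pdFALSE;
        xQueueSendFromISR(uart_rx_queue, &rx_byte, &woken);  /* queue the byte */
        HAL_UART_Receive_IT(&huart1, &rx_byte, 1);           /* re-arm for the next byte */
        portYIELD_FROM_ISR(woken);
    }
}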
You could try the following approach:
Enable the USARTx RX interrupt in the NVIC.
Set the interrupt priority.
Unmask the receive (RXNE) interrupt request in the USART.
Then define the USARTx interrupt handler function named in your vector table.
Whenever data is received on the USARTx, this function gets called automatically and you can copy the data from the USARTx receive data register.
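As a rough bare-metal sketch (register names here follow the F1/F4-style USART with SR/DR; newer families use ISR/RDR instead, so adjust for your part):

/* in your init code: enable the receive interrupt and the IRQ line */
USART1->CR1 |= USART_CR1_RXNEIE;       /* unmask RXNE in the peripheral */
NVIC_SetPriority(USART1_IRQn, 5);      /* priority value is arbitrary here */
NVIC_EnableIRQ(USART1_IRQn);

void USART1_IRQHandler(void)
{
    if (USART1->SR & USART_SR_RXNE) {          /* receive data register not empty */
        uint8_t byte = (uint8_t)USART1->DR;    /* reading DR clears RXNE */
        /* store 'byte' somewhere your main code or task can pick it up */
    }
}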
I would rather suggest another approach. You probably want to achieve higher speeds (let's say 921600 baud), and the per-byte interrupt way is far too slow for that.
You need to implement DMA reception with end-of-data detection. Run your USART in DMA mode with a circular buffer. You will have two events to serve: the DMA transfer-complete interrupt (at which point you copy the data from the current tail pointer to the end of the buffer, so it isn't overwritten), and the USART IDLE interrupt, which detects the end of a reception burst.
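A rough HAL-based sketch of the idea (huart2, the buffer size, and the handler placement are placeholders; the RX DMA stream must be configured in circular mode):

#define RX_BUF_SIZE 256
static uint8_t dma_rx_buf[RX_BUF_SIZE];
static volatile size_t rx_tail;        /* last position already processed */

/* after init: start circular DMA reception and enable the IDLE interrupt */
HAL_UART_Receive_DMA(&huart2, dma_rx_buf, RX_BUF_SIZE);
__HAL_UART_ENABLE_IT(&huart2, UART_IT_IDLE);

/* inside USART2_IRQHandler(), alongside HAL_UART_IRQHandler(&huart2) */
if (__HAL_UART_GET_FLAG(&huart2, UART_FLAG_IDLE)) {
    __HAL_UART_CLEAR_IDLEFLAG(&huart2);
    size_t head = RX_BUF_SIZE - __HAL_DMA_GET_COUNTER(huart2.hdmarx);
    /* bytes between rx_tail and head (wrapping at the buffer end) just arrived */
    rx_tail = head;
}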
I am in the situation where I would like a C program to block on a set of file descriptors until all files are ready. This differs from the traditional select(), poll(), and epoll() system calls that only block until any file descriptor is ready. Is there a standard function that will block until all files are ready? Or perhaps there are some other clever tricks?
Obviously, I could call select() in a loop until all file descriptors are ready, but I don't want to incur the overheads of context switches, preemptions, migrations, etc.. I'd rather that the select()'ing task just sleep until all files are ready.
It's not thread-safe if other threads are operating on some of the same file descriptors at the same time (but you probably shouldn't be doing that anyway), but you can try this:
1. Initialize the poll set to all of the file descriptors you're interested in.
2. poll() for the current set of file descriptors.
3. When poll() returns, scan the revents and find all of the file descriptors that are ready. Remove them from the poll set.
4. If there are any file descriptors still in the set, go back to step 2.
5. poll() one last time with the full set of file descriptors to make sure they are all still ready.
6. If some are not ready anymore, go back to step 1.
7. Success.
It still may involve many poll() calls, but at least it doesn't busy-wait. I don't think there exists a more efficient way.
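A sketch of that loop in C (waiting for readability on nfds descriptors passed in fds[]; removal from the set is done by setting the fd to -1, which poll() ignores):

#include <poll.h>
#include <stdbool.h>
#include <stddef.h>

static bool wait_for_all_readable(const int *fds, size_t nfds)
{
    struct pollfd pfds[nfds];

    for (;;) {
        /* step 1: (re)initialize the poll set with every descriptor */
        for (size_t i = 0; i < nfds; i++)
            pfds[i] = (struct pollfd){ .fd = fds[i], .events = POLLIN };

        size_t remaining = nfds;
        while (remaining > 0) {
            /* step 2: poll whatever is still not ready */
            if (poll(pfds, nfds, -1) < 0)
                return false;
            /* steps 3-4: drop ready descriptors, keep looping while any remain */
            for (size_t i = 0; i < nfds; i++) {
                if (pfds[i].fd >= 0 && (pfds[i].revents & POLLIN)) {
                    pfds[i].fd = -1;
                    remaining--;
                }
            }
        }

        /* step 5: one final non-blocking poll with the full set */
        for (size_t i = 0; i < nfds; i++)
            pfds[i] = (struct pollfd){ .fd = fds[i], .events = POLLIN };
        if (poll(pfds, nfds, 0) < 0)
            return false;
        size_t ready = 0;
        for (size_t i = 0; i < nfds; i++)
            if (pfds[i].revents & POLLIN)
                ready++;
        if (ready == nfds)
            return true;                 /* step 7: success */
        /* step 6: something became not-ready again, start over */
    }
}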
I have a question about a char driver.
The char driver uses GPIO pins to communicate with a hardware device, including interrupt handling.
The driver's release() method is missing.
In what order should the following cleanup steps be performed?
A. Delete cdev and unregister device
B. Free GPIO Resources
C. freeing IRQ resource
D. Unregister major / minor number
In which order should these go in the release() method?
Thanks
As per my understanding, the correct order is C, B, A and then D :-). Explanation: first free the IRQ, since the GPIO pin is used as an interrupt pin and its IRQ number was obtained by passing that pin to gpio_to_irq(); only after that can you go ahead and free the GPIO itself. After that, deletion of the cdev comes into the picture; the cdev is what the file operations, the device node info (dev_t, a 32-bit unsigned integer in which 12 bits hold the major number and the remaining 20 bits the minor number) and the minor number info (the starting minor number and how many minors were requested) are associated with. At last, go ahead and unregister the major/minor numbers.
Actually, some of these things may be done in the release() function, and some of these things must be done in the module_exit() function. It all depends on what you do where.
First, some terminology: module_init() is called when the module is loaded with insmod. The opposite function is module_exit() which is called when the module is unloaded with rmmod. open() is called when a user process tries to open a device file with the open() system call, and the opposite function release() is called when the process that opened the device file (as well as all processes that were branched from that original process) call the close() system call on the file descriptor.
The module_exit() function is the opposite of the module_init() function. Assuming you are using the cdev API, in the module init function you must register the major/minor numbers (D) first with alloc_chrdev_region() or register_chrdev_region() before adding the cdev to the system with cdev_init() and then cdev_add().
It stands to reason that when module_exit() is called, you should undo what you did in the reverse order; i.e. remove the cdev first with cdev_del() and then unregister the major/minor number with unregister_chrdev_region().
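For instance, with the cdev API those two functions might pair up like this (a bare sketch; my_fops and the device name are placeholders):

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static dev_t devno;
static struct cdev my_cdev;
extern const struct file_operations my_fops;   /* your fops, defined elsewhere */

static int __init my_init(void)
{
    int ret = alloc_chrdev_region(&devno, 0, 1, "mydrv");  /* D: get major/minor */
    if (ret)
        return ret;
    cdev_init(&my_cdev, &my_fops);
    ret = cdev_add(&my_cdev, devno, 1);                     /* A: add the cdev */
    if (ret)
        unregister_chrdev_region(devno, 1);
    return ret;
}

static void __exit my_exit(void)
{
    cdev_del(&my_cdev);                    /* A: remove the cdev first */
    unregister_chrdev_region(devno, 1);    /* D: then release the numbers */
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");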
At some point in the module_init() function you may request the GPIO resources with request_mem_region() & ioremap(), and then the IRQ resource with request_irq(). On the other hand you may request the GPIO resources and the IRQ resource in the open() function instead.
If you request these resources in the module_init() function, then you should release these resources in the module_exit() function. However, if you do it in open() then you should keep track of how many processes have the device file open, and when all of them have released the device file, release the resources in the release() function.
Again, whatever order you requested the resources in, in general you should release the resources in the opposite order. I will say however, that almost always it is incorrect to release the memory resources (in your case the GPIO resources) before releasing the IRQ resource, since the IRQ will most likely want to talk to the hardware, either in the top half or the bottom half handler.
In summary, the order depends on how you implemented the driver to request the resources in the first place, however, if you implement your drivers like I do, then in general, perform C then B in release(), and perform A then D in module_exit().
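So for the C-then-B part in release(), a sketch might look like this (open_count, irq_num, gpio_pin and the dev_id cookie are hypothetical state set up in open() or module_init()):

#include <linux/fs.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/atomic.h>

static atomic_t open_count;     /* incremented in open() */
static unsigned int irq_num;    /* obtained via gpio_to_irq(gpio_pin) */
static unsigned int gpio_pin;   /* requested earlier with gpio_request() */
static int my_dev_data;         /* the dev_id cookie passed to request_irq() */

static int my_release(struct inode *inode, struct file *filp)
{
    if (atomic_dec_and_test(&open_count)) {   /* last process closed the file */
        free_irq(irq_num, &my_dev_data);      /* C: stop interrupts first */
        gpio_free(gpio_pin);                  /* B: then release the GPIO */
    }
    return 0;
}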
I have a socket bound to a NIC that I am using to capture packets in a pcap_loop.
I have a separate process running that eventually does a "read" on that same device, but only after a unix local pipe is ready to be read. Is it correct to say that the read() on the device from the 2nd process will read everything that's ready, not just one packet at a time, even though my other process is set up to use pcap_loop to read a packet at a time?
I have a socket bound to a NIC that I am using to capture packets in a pcap_loop.
You say "socket", so I'm guessing that this is Linux (it could also be IRIX, but that's a lot less likely, and the answer is the same in either case; other OSes don't use sockets in libpcap, the native capture mechanism on those OSes uses mechanisms other than sockets).
I have a separate process running that eventually does a "read" on that same device, but only after a unix local pipe is ready to be read. Is it correct to say that the read() on the device from the 2nd process will read everything that's ready, not just one packet at a time,
No. A PF_PACKET socket returns one packet at a time from a read().
There is, by the way, no guarantee that reading from the socket with a read and handling the same socket in libpcap at the same time will work. Libpcap might be using the memory-mapped mechanism to get the packets; unless you've seen documentation on how the memory-mapped mechanism works with read()s done elsewhere, or have read the Linux kernel code enough to figure out how it works, you might not want to assume it'll work the way you want.
If, however, this is FreeBSD, as suggested (but not stated) by the tag, then what libpcap is using is a BPF device, *NOT* a socket. A read() will give you an entire bufferful of packets, and the read()s done by libpcap will give libpcap an entire bufferful of packets, even if it happens to call your callback once per packet. The same issues of read() vs. memory-mapped access could occur, but the memory-mapped BPF in later versions of FreeBSD isn't, by default, used by libpcap.