Is it recommended to set the FTDI delay (latency_timer) to 1ms?

I stumbled over an article from FTDI about the latency timer, Setting a Custom Default Latency Timer Value, which says:
... although 1ms is not recommended as this is the same as the USB frame length.
I have set the latency to 1 ms in several projects (I want low round-robin delays). So is this not recommended? Would 2 ms be the "correct" alternative?

I don't know how old the web page "Setting a Custom Default Latency Timer Value" is. Application note AN232B-04 does not have a special note about a 1 ms latency timer.
Note: the latency timer matters only for small amounts of data (see page 6 of the application note), including the last fragment of a large transfer.
From my experience: the FT2232C/L/D work fine with 1 ms latency, but in MPSSE mode the SEND IMMEDIATE command gives even faster data transmission.
In UART mode an EVENT CHARACTER (one can set the character to '\n' in text mode or to the packet delimiter in packet mode) also triggers transmission, regardless of the latency timer.
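For reference, a minimal sketch of how these settings could be applied with the open-source libftdi API (ftdi_set_latency_timer and ftdi_set_event_char); the VID/PID 0x0403/0x6001 and the chosen values are placeholders for your own device, and the D2XX equivalent of the latency call is FT_SetLatencyTimer:

#include <stdio.h>
#include <ftdi.h>

int main(void)
{
    struct ftdi_context *ftdi = ftdi_new();
    if (!ftdi)
        return 1;

    /* Common FTDI VID/PID; adjust for your device. */
    if (ftdi_usb_open(ftdi, 0x0403, 0x6001) < 0) {
        fprintf(stderr, "open failed: %s\n", ftdi_get_error_string(ftdi));
        ftdi_free(ftdi);
        return 1;
    }

    /* Latency timer in milliseconds, valid range 1..255. */
    if (ftdi_set_latency_timer(ftdi, 1) < 0)
        fprintf(stderr, "set latency failed: %s\n", ftdi_get_error_string(ftdi));

    /* Optional: flush the receive buffer whenever '\n' is seen,
       independent of the latency timer. */
    if (ftdi_set_event_char(ftdi, '\n', 1) < 0)
        fprintf(stderr, "set event char failed: %s\n", ftdi_get_error_string(ftdi));

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}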

As you can read in the linked documentation, 2 ms should be correct.
The valid range for the latency timer is 1 ms to 255 ms.

Related

STM32MP157F dmaengine: dmaengine_prep_dma_memcpy works, dmaengine_prep_dma_cyclic does not

We have implemented a custom driver that uses DMA to copy a large amount of data from the FMC interface (an FPGA is mapped to it) to RAM, using the STM32 MDMA engine with its 32 DMA channels. The FPGA contains a small FIFO we want to copy the data from.
For very fast data acquisition the setup time for new DMA transactions becomes critical!
The first implementation used a workqueue to create the next DMA transaction; it could not be done directly from the "dma_completed" atomic context because some necessary I/O has to wait. This led to pauses of up to 5 ms between DMA transactions and buffer overflows in the FPGA's FIFO.
As I am copying from a memory mapped region to RAM, I am using dmaengine_prep_dma_memcpy.
I implemented a number of improvements that reduced the pause between DMAs:
I am fusing DMA-mapped pages so that fewer DMA transaction entries have to be created and less DMA engine programming is necessary.
I am preparing the next DMA pages up front, so the next DMA transaction can be started directly from the "dma_completed" routine.
I am using a second DMA channel and toggle between the two when dma_completed is called. This allows setting up a second DMA while the first one is still running. Although the Linux DMA API allows this with one channel, the MDMA engine does not and ignores the added transactions.
Usually the pause is now lower than 1 ms, but there are spikes where the FIFO nearly overflows.
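For illustration, a rough sketch of this double-buffered memcpy approach using the generic Linux dmaengine API; the helper name queue_memcpy and the dst/src/len arguments are placeholders for the pre-mapped FIFO window and RAM buffer, and error handling is reduced to a minimum:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Hypothetical helper: queue one memcpy transfer on the given channel.
   The completion callback runs in the "dma_completed" context, where the
   next, already-prepared transfer on the other channel can be issued. */
static dma_cookie_t queue_memcpy(struct dma_chan *chan,
                                 dma_addr_t dst, dma_addr_t src, size_t len,
                                 dma_async_tx_callback done, void *arg)
{
    struct dma_async_tx_descriptor *desc;
    dma_cookie_t cookie;

    desc = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
    if (!desc)
        return -ENOMEM;

    desc->callback = done;
    desc->callback_param = arg;

    cookie = dmaengine_submit(desc);
    dma_async_issue_pending(chan);  /* start (or keep running) this channel */
    return cookie;
}

Toggling between two channels then means: when channel A completes, call queue_memcpy() on channel B with buffers prepared up front, and vice versa.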
Finally I tried to use dmaengine_prep_dma_cyclic. This would be perfect. A continuously running DMA with no need for a setup time between interrupts.
But this does not work. Or rather: I do not get it to work...
The transaction created with dmaengine_prep_dma_cyclic does not want to start!
I am getting a new dma_cookie, and any status request to the channel returns "DMA_IN_PROGRESS". It never completes, and the completion callback is also never called.
Though dmaengine_prep_dma_memcpy works fine...
I think this is because of the difference between software- and hardware-triggered DMA transactions.
Looking into stm32-mdma.c I see that dmaengine_prep_dma_memcpy has its own setup routine, whereas dmaengine_prep_dma_cyclic uses stm32_mdma_set_xfer_param(), which always configures a HW request.
My big questions:
Is there a way to use dmaengine_prep_dma_cyclic for a MEMORY to MEMORY DMA transaction (software triggered)? This would be the perfect solution to my performance problem...
Are we missing some signals to connect the FPGA to the SoC? My FPGA programming colleague suspects a missing TSEL (trigger selection) setting. He suspects dmaengine_prep_dma_cyclic will work then.
If a minimal driver module source code example would help in getting better answers, I can provide one shortly. Please note that this is highly hardware specific; SoCs other than the STM32MP157F may behave differently.
Thanks for any feedback!
Bye Gunther
References:
https://wiki.st.com/stm32mpu/wiki/Dmaengine_overview
https://github.com/STMicroelectronics/linux/blob/v5.15-stm32mp/drivers/dma/stm32-mdma.c

How exactly do socket receives work at a lower level (eg. socket.recv(1024))?

I've read many Stack Overflow questions similar to this, but I don't think any of the answers really satisfied my curiosity. I have an example below on which I would like some clarification.
Suppose the client is blocking on socket.recv(1024):
socket.recv(1024)
print("Received")
Also, suppose I have a server sending 600 bytes to the client. Let us assume that these 600 bytes are broken into 4 small packets (of 150 bytes each) and sent over the network. Now suppose the packets reach the client at different timings with a difference of 0.0001 seconds (eg. one packet arrives at 12.00.0001pm and another packet arrives at 12.00.0002pm, and so on..).
How does socket.recv(1024) decide when to return execution to the program and allow the print() function to execute? Does it return execution immediately after receiving the 1st packet of 150 bytes? Or does it wait for some arbitrary amount of time (eg. 1 second, for which by then all packets would have arrived)? If so, how long is this "arbitrary amount of time"? Who determines it?
Well, that will depend on many things, including the OS and the speed of the network interface. For a 100 gigabit interface, 100 µs is "forever," but for a 10 Mbit interface, you can't even transmit the packets that fast. So I won't pay too much attention to the exact timing you specified.
Back in the day when TCP was being designed, networks were slow and CPUs were weak. Among the flags in the TCP header is the "Push" flag to signal that the payload should be immediately delivered to the application. So if we hop into the Wayback machine, the answer would have been something like "it depends on whether or not the PSH flag is set in the packets." However, there is generally no user-space API to control whether or not the flag is set. Generally what would happen is that for a single write that gets broken into several packets, the final packet would have the PSH flag set. So the answer for a slow network and weakling CPU might be that if it was a single write, the application would likely receive the 600 bytes in one read. You might then think that using four separate writes would result in four separate reads of 150 bytes, but after the introduction of Nagle's algorithm the data from the second to fourth writes might well be sent in a single packet unless Nagle's algorithm was disabled with the TCP_NODELAY socket option, since Nagle's algorithm will wait for the ACK of the first packet before sending anything less than a full frame.
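For completeness, disabling Nagle's algorithm is a single socket option on the sender; a minimal sketch with the standard BSD sockets API, where sock is assumed to be an already-created TCP socket:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Disable Nagle's algorithm so small writes go out immediately. */
static int disable_nagle(int sock)
{
    int one = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}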
If we return from our trip in the Wayback machine to the modern age where 100 Gigabit interfaces and 24-core machines are common, our problems are very different and you will have a hard time finding an explicit check for the PSH flag being set in the Linux kernel. What is driving the design of the receive side is that networks are getting far faster while the packet size/MTU has remained largely fixed, and CPU speed is flatlining while cores are abundant. Reducing per-packet overhead (including hardware interrupts) and distributing the packets efficiently across multiple cores is imperative. At the same time it is imperative to get the data from that 100+ Gigabit firehose up to the application ASAP. One hundred microseconds of data on such a NIC is a considerable amount of data to be holding onto for no reason.
I think one of the reasons that there are so many questions of the form "What the heck does receive do?" is that it can be difficult to wrap your head around what is a thoroughly asynchronous process, whereas the send side has a more familiar control flow where it is much easier to trace the flow of packets to the NIC and where we are in full control of when a packet will be sent. On the receive side, packets just arrive when they want to.
Let's assume that a TCP connection has been set up and is idle, there is no missing or unacknowledged data, the reader is blocked on recv, and the reader is running a fresh version of the Linux kernel. And then a writer writes 150 bytes to the socket and the 150 bytes gets transmitted in a single packet. On arrival at the NIC, the packet will be copied by DMA into a ring buffer, and, if interrupts are enabled, it will raise a hardware interrupt to let the driver know there is fresh data in the ring buffer. The driver, which desires to return from the hardware interrupt in as few cycles as possible, disables hardware interrupts, starts a soft IRQ poll loop if necessary, and returns from the interrupt. Incoming data from the NIC will now be processed in the poll loop until there is no more data to be read from the NIC, at which point it will re-enable the hardware interrupt. The general purpose of this design is to reduce the hardware interrupt rate from a high speed NIC.
Now here is where things get a little weird, especially if you have been looking at nice clean diagrams of the OSI model where higher levels of the stack fit cleanly on top of each other. Oh no, my friend, the real world is far more complicated than that. That NIC that you might have been thinking of as a straightforward layer 2 device, for example, knows how to direct packets from the same TCP flow to the same CPU/ring buffer. It also knows how to coalesce adjacent TCP packets into larger packets (although this capability is not used by Linux and is instead done in software). If you have ever looked at a network capture and seen a jumbo frame and scratched your head because you sure thought the MTU was 1500, this is because this processing is at such a low level it occurs before netfilter can get its hands on the packet. This packet coalescing is part of a capability known as receive offloading. In particular, let's assume that your NIC/driver has generic receive offload (GRO) enabled (which is not the only possible flavor of receive offloading), the purpose of which is to reduce the per-packet overhead from your firehose NIC by reducing the number of packets that flow through the system.
So what happens next is that the poll loop keeps pulling packets off of the ring buffer (as long as more data is coming in) and handing them off to GRO to consolidate if it can, and then they get handed off to the protocol layer. As best I know, the Linux TCP/IP stack is just trying to get the data up to the application as quickly as it can, so I think your question boils down to "Will GRO do any consolidation on my 4 packets, and are there any knobs I can turn that affect this?"
Well, the first thing you can do is disable any form of receive offloading (e.g. via ethtool), which I think should get you 4 reads of 150 bytes for 4 packets arriving like this in order, but I'm prepared to be told I have overlooked another reason why the Linux TCP/IP stack won't send such data straight to the application if the application is blocked on a read as in your example.
The other knob you have if GRO is enabled is GRO_FLUSH_TIMEOUT which is a per NIC timeout in nanoseconds which can be (and I think defaults to) 0. If it is 0, I think your packets may get consolidated (there are many details here including the value of MAX_GRO_SKBS) if they arrive while the soft IRQ poll loop for the NIC is still active, which in turn depends on many things unrelated to your four packets in your TCP flow. If non-zero, they may get consolidated if they arrive within GRO_FLUSH_TIMEOUT nanoseconds, though to be honest I don't know if this interval could span more than one instantiation of a poll loop for the NIC.
There is a nice writeup on the Linux kernel receive side here which can help guide you through the implementation.
A normal blocking receive on a TCP connection returns as soon as there is at least one byte to return to the caller. If the caller would like to receive more bytes, they can simply call the receive function again.
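In other words, the usual pattern is to loop until the expected amount has arrived. A minimal sketch in C (BSD sockets; the helper name recv_exact and the assumption of a connected socket are illustrative, mirroring the 600-byte example above):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly `want` bytes from a connected TCP socket. Each recv()
   call may return anywhere from 1 byte up to `want - got` bytes. */
static ssize_t recv_exact(int sock, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(sock, buf + got, want - got, 0);
        if (n == 0)              /* peer closed the connection */
            break;
        if (n < 0) {
            perror("recv");
            return -1;
        }
        got += (size_t)n;
    }
    return (ssize_t)got;
}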

STM32F302 Adc with DMA for different size and channel

I'm using an STM32F302 in a QFN32 package and unfortunately it has only one ADC module. One channel must get around 500 samples in one period, and it must be synchronized with a PWM signal (I am thinking of using a timer and toggling this I/O in its callback, because while reading this ADC channel I must know whether the I/O is high or low, so that I can interpret the value accordingly). Furthermore, there are 4 more channels which must be read (they don't need as many samples as the first one; 8 or 16 samples will be enough). However, there is only one ADC module. Consequently, can I do this? If yes, how? Thank you.
ST ADCs have two conversion modes: regular and injected.
Regular mode is what all ADCs have. You start it, either by software or by a trigger (timer/GPIO), and it performs a single conversion or a sequence of conversions. The result is written to a common data register, which the DMA takes care of.
Injected mode is a high-priority, preempting conversion. Once you start an injected conversion sequence by software or trigger, the ADC injects those conversions between the regular conversions, at higher priority. The results are stored in dedicated injected result registers, to be read in the interrupt.
Only regular mode supports DMA. See AN4195 for more info.
I suggest you use a timer to trigger a regular sequence for your fast channel, with a circular DMA setup to move the data. And use another timer to trigger the injected sequence. There is a maximum of 4 injected channels, so you are in luck!
Obviously, you can do this the other way around. Have fast injections and slow regular. But you'll need another timer synchronized to the injected start trigger to get the DMA to move the data.
That is, if your sample rate does not allow immediate processing; otherwise you can just use the ISR.
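As a rough sketch of the suggested split, assuming the STM32 HAL with CubeMX having already configured ADC1 for a timer-triggered regular channel (circular DMA) plus a timer-triggered injected sequence of four channels; hadc1, FAST_SAMPLES and the buffer names are placeholders:

#include "stm32f3xx_hal.h"

#define FAST_SAMPLES 500u

extern ADC_HandleTypeDef hadc1;      /* configured by CubeMX */
static uint16_t fast[FAST_SAMPLES];  /* filled by circular DMA */
static uint16_t slow[4];             /* the four injected channels */

void adc_start(void)
{
    /* Regular group: timer-triggered conversions, circular DMA into fast[]. */
    HAL_ADC_Start_DMA(&hadc1, (uint32_t *)fast, FAST_SAMPLES);

    /* Injected group: armed here, converted on its own timer trigger. */
    HAL_ADCEx_InjectedStart_IT(&hadc1);
}

/* Called when the injected sequence (up to 4 ranks) finishes. */
void HAL_ADCEx_InjectedConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    slow[0] = (uint16_t)HAL_ADCEx_InjectedGetValue(hadc, ADC_INJECTED_RANK_1);
    slow[1] = (uint16_t)HAL_ADCEx_InjectedGetValue(hadc, ADC_INJECTED_RANK_2);
    slow[2] = (uint16_t)HAL_ADCEx_InjectedGetValue(hadc, ADC_INJECTED_RANK_3);
    slow[3] = (uint16_t)HAL_ADCEx_InjectedGetValue(hadc, ADC_INJECTED_RANK_4);
}

/* Half- and full-transfer callbacks hand you each half of fast[] to process
   while the DMA keeps filling the other half. */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc) { /* first half  */ }
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)     { /* second half */ }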

Why is UDP packet reception seemingly optimized mid-execution?

I'm running a client-server configuration over Ethernet and measuring packet latency at both ends. The client (Windows) is sending packets every 5 ms (confirmed with Wireshark) as it should. Yet the server (embedded Linux) only receives packets at 5 ms intervals for a few seconds, at which point it stops for 300 ms. After this break the latency is only 20 µs. After another period of a few seconds it takes another 300 ms break. This repeats indefinitely (300 ms break, 20 µs packet latency burst). It seems as if the server program is being optimized mid-execution to read I/O in shorter bursts. Why is this happening?
Disclaimer: I haven't posted the code, as the client and server are small subsets of more complex applications, however, I am willing to factor it out if an obvious answer doesn't present itself.
This is UDP, so there is no handshake or any flow control mechanism. Those 300 ms must be because of work the server is doing while processing the UDP messages it receives. During those 300 ms the server has surely lost around 60 messages that were not read from the client.
You probably want to check that the server does not take more than 5 ms to process each message if it uses a single thread for processing. If the server uses multithreading to process the messages and the processing takes some time, even if it takes 1 ms, you might be in a situation where at some point all threads are competing for resources and they don't finish in time to read the next message. For the problem you are describing I would bet the server is multithreaded and you have that problem. I cannot be 100% sure of that for lack of info, though. But in any case, you want to check the time it takes to process messages because you might be dealing with real-time requirements.
I spaced out the measurements to 1 in every 1000 packets and now it is behaving itself. I was using printf every 5 ms, which must have eventually filled the printf TX queue entirely. This then delayed execution for 300 ms. Once printf caught its breath, the program had a queue full of incoming packets and thus was seemingly receiving packets every 20 µs.
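A minimal sketch of that fix, rate-limiting the logging so it cannot stall the receive loop (the recvfrom call, buffer size, and the 1000-packet interval are placeholders):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Illustrative UDP receive loop: log only every 1000th packet so printf
   (which may block on a slow console/UART) cannot stall reception. */
static void rx_loop(int sock)
{
    char buf[1500];
    unsigned long count = 0;

    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0) {
            perror("recvfrom");
            break;
        }
        if (++count % 1000 == 0)
            printf("packet %lu, %zd bytes\n", count, n);
    }
}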

interrupt scheduling in .Net Micro Framework

.NET MF doesn't support preemptive interrupts. Once a process is completed or the 20 msec timer assigned by the scheduler times out, the interrupts can be processed.
Is there any way to change this 20 msec to a shorter time, or change the scheduler process and make it like an real-time scheduler?
Alternatively, assume the 20 ms delay before interrupt processing begins can be tolerated, but the exact time at which the interrupt occurred is a must-know. I think that with the time argument of an InterruptPort event handler, one can work backwards and determine the time at which the interrupt got queued.
However, what if a serial port is used and the time of data arrival at the port must be known? Is there any way to determine at what time data arrived at the serial port, or when its corresponding interrupt was queued by the framework? Thanks.
The .NET Micro Framework was designed to be used on memory-constrained microcontrollers without an MMU (Memory Management Unit). The .NET Micro Framework, even with its multithreading model and TCP/IP stack support, does not run on a classical real-time operating system structure.
If you need strict real-time capability, please check out Windows CE 6.0 R3 and Windows Embedded Compact 7, which are hard real-time operating systems and have native 32-bit real-time support.
Hope my answer helps you.