Assign multiple CPUs to a NIC Rx queue - ethernet

I have a 4xlarge instance with 16 vCPUs, but our NIC only supports 8 queues, each of which is pinned to a specific core.
I want to pin each queue to 2 vCPUs instead, just to validate a hypothesis. Is this possible? I assume there must be some sort of bitmask that pins a queue to a vCPU, so I think I just need to modify that mask?
Or is this sort of thing only achievable via Receive Packet Steering (RPS)?
Also worth asking: is it even OK to distribute packets to different CPUs? Should each TCP flow (sip, sport, dip, dport) be strictly serialized?
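For reference, a minimal sketch of the two masks in question, assuming a Linux host. The interface name eth0, IRQ number 24 and queue rx-0 are placeholders (look them up in /proc/interrupts and under /sys/class/net/). Note that /proc/irq/<n>/smp_affinity only allows the interrupt to land on any CPU in the mask (one CPU at a time), while the rps_cpus mask is what lets RPS spread one queue's packets across several CPUs; RPS hashes on the flow tuple, so packets of a single TCP flow still go to the same CPU and per-flow ordering is preserved.

// Hedged sketch: eth0, IRQ 24 and queue rx-0 are made-up placeholders.
// Writing "3" (binary 11) allows CPU0 and CPU1. Needs root.
#include <fstream>
#include <string>

static void write_mask(const std::string &path, const std::string &mask)
{
    std::ofstream f(path);
    f << mask << std::endl;
}

int main()
{
    // Hardware IRQ affinity: the kernel still delivers each interrupt to
    // only one CPU at a time, even if several bits are set in the mask.
    write_mask("/proc/irq/24/smp_affinity", "3");

    // RPS: software steering of rx-0's packets onto CPU0 and CPU1.
    write_mask("/sys/class/net/eth0/queues/rx-0/rps_cpus", "3");
    return 0;
}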

Related

Omnetpp application sends multiple streams

Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is something like 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in my OMNeT++ module's handleMessage() method I have two sendTo() calls, each scheduled "as soon as possible", i.e., with no delay, which corresponds to the idea of multiple parallel streams. How does OMNeT++ handle the situation where it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, so that really only one packet can be sent per handleMessage() call, or is that wrong? I want to optimize the data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams, because if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). Any call to the sendTo() method just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() method, they will be queued in that order. The packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. So you can send as many packets as you wish and those packets will be delivered one by one to the destination's handleMessage() method. But beware! Even if the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That's why OMNeT++, while being a single-threaded application that runs events sequentially, can still simulate an unlimited number of systems running in parallel.
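As a minimal sketch of that behavior (the module and gate names here are my own, not from the question's model): two send() calls issued from the same handleMessage() both become events at the current simulation time; they execute one after another in real time but are simultaneous in simulation time, assuming the out gate is an ideal connection with no transmission data rate of its own.

// Hedged sketch only: SensorSource and the "out" gate are made-up names.
#include <omnetpp.h>
using namespace omnetpp;

class SensorSource : public cSimpleModule
{
  protected:
    virtual void initialize() override
    {
        scheduleAt(simTime(), new cMessage("burst"));   // kick off the first burst
    }
    virtual void handleMessage(cMessage *msg) override
    {
        cPacket *video = new cPacket("video");
        video->setByteLength(1400);
        cPacket *lidar = new cPacket("lidar");
        lidar->setByteLength(1240);

        send(video, "out");     // event queued at the current simTime()
        send(lidar, "out");     // queued right behind it, same simTime()

        scheduleAt(simTime() + 0.001, msg);             // next burst 1 ms later
    }
};

Define_Module(SensorSource);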
BUT:
You are not modeling directly with raw OMNeT++ modules, but rather using the INET Framework, which is a model library created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmission between nodes is properly modeled and only a single packet can travel on an Ethernet line at a time. Other packets must queue in the network interface queue and wait there for an opportunity to be delivered.
This is actually the core problem of Time-Sensitive Networking: given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and network gate scheduling to achieve some desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need; but you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

How exactly do socket receives work at a lower level (eg. socket.recv(1024))?

I've read many Stack Overflow questions similar to this, but I don't think any of the answers really satisfied my curiosity. I have an example below for which I would like some clarification.
Suppose the client is blocking on socket.recv(1024):
socket.recv(1024)
print("Received")
Also, suppose I have a server sending 600 bytes to the client. Let us assume that these 600 bytes are broken into 4 small packets (of 150 bytes each) and sent over the network. Now suppose the packets reach the client at different times, 0.0001 seconds apart (e.g. one packet arrives at 12.00.0001 pm, another at 12.00.0002 pm, and so on).
How does socket.recv(1024) decide when to return execution to the program and allow the print() function to execute? Does it return execution immediately after receiving the 1st packet of 150 bytes? Or does it wait for some arbitrary amount of time (e.g. 1 second, by which point all packets would have arrived)? If so, how long is this "arbitrary amount of time"? Who determines it?
Well, that will depend on many things, including the OS and the speed of the network interface. For a 100 gigabit interface, 100 µs is "forever," but for a 10 Mbit interface you can't even transmit the packets that fast, so I won't pay too much attention to the exact timing you specified.
Back in the day when TCP was being designed, networks were slow and CPUs were weak. Among the flags in the TCP header is the "push" (PSH) flag, which signals that the payload should be delivered to the application immediately. So if we hop into the Wayback machine, the answer would have been something like: it depends on whether or not the PSH flag is set in the packets. However, there is generally no user-space API to control whether or not the flag is set. Generally what would happen is that for a single write that gets broken into several packets, the final packet would have the PSH flag set. So the answer for a slow network and a weakling CPU might be that, if it was a single write, the application would likely receive the 600 bytes. You might then think that using four separate writes would result in four separate reads of 150 bytes, but after the introduction of Nagle's algorithm the data from the second to fourth writes might well be sent in a single packet unless Nagle's algorithm was disabled with the TCP_NODELAY socket option, since Nagle's algorithm will wait for the ACK of the first packet before sending anything less than a full frame.
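For completeness, a minimal sketch of the knob mentioned above, assuming fd is an already-connected TCP socket:

// Disable Nagle's algorithm so small writes go out immediately instead of
// waiting for the previous segment's ACK.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int fd)
{
    int one = 1;
    // TCP_NODELAY: send segments as soon as possible, even if small.
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}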
If we return from our trip in the Wayback machine to the modern age, where 100 Gigabit interfaces and 24-core machines are common, our problems are very different and you will have a hard time finding an explicit check for the PSH flag in the Linux kernel. What is driving the design of the receive side is that networks are getting far faster while the packet size/MTU has stayed largely fixed, and CPU speed is flatlining while cores are abundant. Reducing per-packet overhead (including hardware interrupts) and distributing the packets efficiently across multiple cores is imperative. At the same time it is imperative to get the data from that 100+ Gigabit firehose up to the application ASAP: one hundred microseconds of data on such a NIC is a considerable amount of data to be holding onto for no reason.
I think one of the reasons there are so many questions of the form "What the heck does receive do?" is that it can be difficult to wrap your head around what is a thoroughly asynchronous process, whereas the send side has a more familiar control flow: it is much easier to trace the flow of packets to the NIC, and we are in full control of when a packet will be sent. On the receive side, packets just arrive when they want to.
Let's assume that a TCP connection has been set up and is idle, there is no missing or unacknowledged data, the reader is blocked on recv, and the reader is running a fresh version of the Linux kernel. And then a writer writes 150 bytes to the socket and the 150 bytes gets transmitted in a single packet. On arrival at the NIC, the packet will be copied by DMA into a ring buffer, and, if interrupts are enabled, it will raise a hardware interrupt to let the driver know there is fresh data in the ring buffer. The driver, which desires to return from the hardware interrupt in as few cycles as possible, disables hardware interrupts, starts a soft IRQ poll loop if necessary, and returns from the interrupt. Incoming data from the NIC will now be processed in the poll loop until there is no more data to be read from the NIC, at which point it will re-enable the hardware interrupt. The general purpose of this design is to reduce the hardware interrupt rate from a high speed NIC.
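As a toy, userspace illustration only (this is not kernel code), the shape of that pattern is roughly: the interrupt handler masks further interrupts and defers to a poll loop that drains the ring buffer, and interrupts are re-enabled only once the ring is empty.

// Toy illustration of the NAPI-style pattern described above; the deque
// stands in for the NIC's DMA ring buffer.
#include <deque>
#include <cstdio>

std::deque<int> ring;
bool irq_enabled = true;

void poll_loop()                   // stands in for the softirq / NAPI poll
{
    while (!ring.empty()) {
        int pkt = ring.front();
        ring.pop_front();
        std::printf("processed packet %d\n", pkt);   // hand to protocol stack
    }
    irq_enabled = true;            // ring drained: allow interrupts again
}

void hardware_interrupt()          // kept as short as possible
{
    if (!irq_enabled)
        return;
    irq_enabled = false;           // mask further interrupts from this NIC
    poll_loop();                   // in the kernel this is scheduled, not called inline
}

int main()
{
    ring = {1, 2, 3};
    hardware_interrupt();
}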
Now here is where things get a little weird, especially if you have been looking at nice clean diagrams of the OSI model where higher levels of the stack fit cleanly on top of each other. Oh no, my friend, the real world is far more complicated than that. That NIC that you might have been thinking of as a straightforward layer 2 device, for example, knows how to direct packets from the same TCP flow to the same CPU/ring buffer. It also knows how to coalesce adjacent TCP packets into larger packets (although this capability is not used by Linux and is instead done in software). If you have ever looked at a network capture, seen a jumbo frame and scratched your head because you were sure the MTU was 1500, it is because this processing happens at such a low level that it occurs before netfilter can get its hands on the packet. This packet coalescing is part of a capability known as receive offloading; in particular, let's assume that your NIC/driver has generic receive offload (GRO) enabled (which is not the only possible flavor of receive offloading), the purpose of which is to reduce the per-packet overhead from your firehose NIC by reducing the number of packets that flow through the system.
So what happens next is that the poll loop keeps pulling packets off the ring buffer (as long as more data is coming in) and handing them off to GRO to consolidate if it can, and then they get handed off to the protocol layer. As best I know, the Linux TCP/IP stack is just trying to get the data up to the application as quickly as it can, so I think your question boils down to "Will GRO do any consolidation on my 4 packets, and are there any knobs I can turn that affect this?"
Well, the first thing you can do is disable any form of receive offloading (e.g. via ethtool), which I think should get you 4 reads of 150 bytes for 4 packets arriving in order like this. But I'm prepared to be told I have overlooked another reason why the Linux TCP/IP stack won't send such data straight to the application when the application is blocked on a read, as in your example.
The other knob you have, if GRO is enabled, is gro_flush_timeout, a per-NIC timeout in nanoseconds which can be (and I think defaults to) 0. If it is 0, I think your packets may get consolidated (there are many details here, including the value of MAX_GRO_SKBS) if they arrive while the soft IRQ poll loop for the NIC is still active, which in turn depends on many things unrelated to the four packets in your TCP flow. If it is non-zero, they may get consolidated if they arrive within gro_flush_timeout nanoseconds, though to be honest I don't know whether this interval can span more than one instantiation of a poll loop for the NIC.
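If you want to inspect or experiment with that timeout, it is exposed per device in sysfs; a hedged sketch (eth0 is a placeholder, and writing requires root):

// Read and then set /sys/class/net/<dev>/gro_flush_timeout (nanoseconds).
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    const std::string path = "/sys/class/net/eth0/gro_flush_timeout";

    std::ifstream in(path);
    std::string current;
    std::getline(in, current);
    std::cout << "gro_flush_timeout = " << current << " ns\n";

    std::ofstream out(path);
    out << 20000;   // e.g. allow GRO to hold packets for up to 20 microseconds
    return 0;
}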
There is a nice writeup on the Linux kernel receive side here which can help guide you through the implementation.
A normal blocking receive on a TCP connection returns as soon as there is at least one byte to return to the caller. If the caller would like to receive more bytes, they can simply call the receive function again.
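In code, that usually means looping until you have all the bytes you expect; a minimal sketch (error handling kept to the bare minimum):

// Keep calling recv() until "want" bytes (e.g. the 600 in the question)
// have been accumulated; a blocking recv() may return as little as 1 byte.
#include <sys/socket.h>
#include <cstddef>

ssize_t recv_exact(int fd, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, buf + got, want - got, 0);
        if (n <= 0)                 // 0 = peer closed, <0 = error
            return n;
        got += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(got);
}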

PCIe Understanding

As this domain is new to me, I have some confusion understanding PCIe.
I was previously working on protocols like I2C, SPI, UART and CAN, and most of these protocols have well-defined docs (a max of 300 pages).
In almost all of the protocols mentioned, from a software perspective, the application just had to write to a data register and the rest would be taken care of by the hardware.
For example, in UART, we just load data into the data register and the data is sent out with a start, parity and stop bit.
I have read a few things about PCIe online and here is the understanding I have so far.
During system boot, the BIOS firmware figures out the memory space required by the PCIe device via a magic write-and-read procedure on the BARs in the PCIe device (endpoint).
Once it has figured that out, it allocates an address space for the device in the system memory map (no actual RAM is used in the host; the memory resides only in the endpoint, and the endpoint is memory-mapped into the host).
I see that PCIe has a few header fields that the BIOS firmware fills in during the bus enumeration phase.
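A hedged illustration of that "magic write and read" sizing step: the firmware writes all 1s to a BAR, reads it back, masks off the low type bits, and takes the two's complement to get the size. The read-back value below (0xFFFFF000, a 4 KiB memory BAR) is a made-up example:

// BAR sizing arithmetic only; no real config-space access is performed here.
#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t readback = 0xFFFFF000;               // value read after writing 0xFFFFFFFF
    uint32_t size     = ~(readback & ~0xFu) + 1;  // clear the low 4 type/prefetch bits
    std::printf("BAR size = %u bytes\n", size);   // prints 4096
    return 0;
}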
Now, if the host wants to set a bit in a configuration register located at address 0x10000004 (an address mapped to the endpoint), the host would do something like this (assume just one endpoint exists, with no branches):
*(volatile uint32_t *)0x10000004 |= (1u << Bit_pos);
1. How does the Root Complex know where to direct these writes, given that the BAR is in the endpoint?
2. Does the RC broadcast to all endpoints, with each endpoint comparing the address against the address programmed into its BAR to decide whether to accept it (like an acceptance filter in CAN)?
3. Does the RC add all the PCIe header-related info (the host just writes to the address)?
4. If the host writes to 0x10000004, will it write to the register at offset 0x4 in the endpoint?
5. How does the host know the endpoint has been given an address space starting at 0x10000000?
6. Is the RC like a router?
The above queries relate only to reading or writing a config register in the endpoint.
The following queries relate to data transfer from the host to the endpoint.
1. Suppose the host asks the endpoint to save particular data present in DRAM to an SSD, and the SSD is connected to the PCIe slot; will PCIe also perform DMA transfers?
Like, are there special BARs in the endpoint that the host writes with a start address in DRAM that has to be moved to the SSD, which in turn triggers the endpoint to perform a DMA transfer from host to endpoint?
I am trying to understand PCIe relative to the other protocols I have worked on so far. This is all a bit new to me.
The RC is generally part of the CPU itself. It serves as a bridge that routes requests from the CPU downstream, and requests from the endpoints to the CPU upstream.
PCIe endpoints have Type 0 headers, and bridges/switches have Type 1 headers. Type 1 headers have Base (minimum address) and Limit (maximum address) registers. Type 0 headers have BAR registers that are programmed during the enumeration phase.
After the enumeration phase is complete and all the endpoints have their BARs programmed, the Base and Limit registers in the Type 1 headers of the RC and the bridges/switches are programmed.
Ex: Assume a system that has only one endpoint connected directly to the RC with no intermediate bridges/switches, whose BAR has the value 0xA00000.
If it requests 4 KB of address space in the CPU's memory map (MMIO), the RC would have its Base register as 0xA00000 and its Limit register as 0xAFFFFF (the window is always 1 MB aligned, even though the space requested by the endpoint is much less than 1 MB).
If the CPU writes to address 0xA00004, the RC will look at the Base and Limit registers to find out whether the address falls in its range, and route the packet downstream to the endpoint.
Endpoints use their BARs to decide whether they must accept a packet or not.
The RC, bridges and switches use Base and Limit registers to route packets to the correct downstream port. A switch can have multiple downstream ports, and each port has its own Type 1 header whose Base and Limit registers are programmed with respect to the endpoints connected to that port. This is what is used for routing packets.
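A minimal sketch of that routing decision, reusing the example numbers from this answer (it only shows the Base/Limit check and the BAR check, not real hardware behavior):

// Bridge/RC: forward downstream if the address falls in the Base..Limit window.
// Endpoint: claim the request if the address falls inside its BAR region.
#include <cstdint>
#include <cstdio>

bool bridge_routes(uint64_t addr, uint64_t base, uint64_t limit)
{
    return addr >= base && addr <= limit;
}

bool endpoint_claims(uint64_t addr, uint64_t bar_base, uint64_t bar_size)
{
    return addr >= bar_base && addr < bar_base + bar_size;
}

int main()
{
    uint64_t addr = 0xA00004;                          // CPU write from the example
    std::printf("bridge forwards: %d\n", bridge_routes(addr, 0xA00000, 0xAFFFFF));
    std::printf("endpoint claims: %d\n", endpoint_claims(addr, 0xA00000, 4096));
    return 0;
}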
Data transfer between CPU memory and endpoints is done via PCIe Memory Writes. Each PCIe packet (TLP) can carry at most 4 KB of payload (the spec maximum; the negotiated Max_Payload_Size is usually smaller). If more than that has to be sent to the endpoint, it is done via multiple Memory Writes.
Memory Writes are posted transactions (no ACK from the endpoint is needed).
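As a small illustration of that splitting (256 bytes is just a commonly negotiated Max_Payload_Size; the actual value is device and platform dependent):

// Count how many Memory Write TLPs a transfer would need at a given payload size.
#include <cstddef>
#include <cstdio>

int main()
{
    const size_t total = 16384;       // bytes to move to the endpoint
    const size_t mps   = 256;         // assumed negotiated Max_Payload_Size

    size_t tlps = 0;
    for (size_t off = 0; off < total; off += mps)
        ++tlps;                        // one posted Memory Write per chunk
    std::printf("%zu byte transfer -> %zu Memory Write TLPs\n", total, tlps);
    return 0;
}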

I don't understand the paragraph about multicast

This paragraph is from UNP, chapter 21.3, page 555:
A host running an application that has joined some multicast group whose corresponding Ethernet address just happens to be one that the interface receives when it is programmed to receive 01:00:5e:00:01:01 (i.e., the interface card performs imperfect filtering). This frame will be discarded either by the datalink layer or by the IP layer.
I just don't know which special case the author is talking about. Could you help me understand it clearly?
In IPv4, a multicast address (old Class D) consists of 4 bits fixed to identify it as multicast (1110) and the remaining 28 bits to identify the group.
Since there are only 23 bits available in the MAC address (the high-order 25 bits are fixed), when you map the low-order 23 bits of the multicast address into the low-order 23 bits of the MAC you lose 5 bits of addressing information. So multiple multicast addresses end up with the same MAC address.
for example
237.138.0.1
238.138.0.1
239.138.0.1
all map to MAC address: 01:00:5e:0a:00:01 (There are more, this is just a subset to illustrate)
So if you join group 237.138.0.1, your Ethernet card will start sending frames up the stack for that MAC. Since it is an imperfect match (we discarded those 5 bits), the Ethernet card will also send frames for 238.138.0.1 and 239.138.0.1 up the stack. But since you are not interested in those frames, they will be discarded at Layer 2 (data link) or Layer 3 (network), where they can be matched exactly.
So the special case is that if you have multiple multicast streams that occupy the same lower 23 bits of address space, all hosts on the network segment will have to process those packets higher up in the stack, and thus do more work to tell whether a packet they got is one they are interested in.
Normally you just need to make sure, when planning your multicast deployments, that you try to avoid overlapping addresses.
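A small sketch of the 23-bit mapping described above, reproducing the example addresses (the helper below is mine, not from UNP):

// Map an IPv4 multicast group to its Ethernet MAC: fixed 01:00:5e prefix plus
// the low 23 bits of the group address, so the 5 dropped bits cause collisions.
#include <cstdint>
#include <cstdio>

void print_mcast_mac(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    uint32_t ip = (uint32_t(a) << 24) | (uint32_t(b) << 16) | (uint32_t(c) << 8) | d;
    uint32_t low23 = ip & 0x007FFFFF;           // the 5 high group bits are lost here
    std::printf("%u.%u.%u.%u -> 01:00:5e:%02x:%02x:%02x\n",
                (unsigned)a, (unsigned)b, (unsigned)c, (unsigned)d,
                (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF);
}

int main()
{
    print_mcast_mac(237, 138, 0, 1);
    print_mcast_mac(238, 138, 0, 1);
    print_mcast_mac(239, 138, 0, 1);   // all three print 01:00:5e:0a:00:01
    return 0;
}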

What is the fastest (lowest latency) messaging queue solution for sending a message from host A to host B?

OK folks, NOT counting exotic interconnects (InfiniBand), kernel bypass or any other fancy stuff, just plain TCP/IP (TCP/UDP over Ethernet) networking: what is the fastest messaging queue implementation that can deliver a message from host A to host B?
Let's assume 10 Gigabit Ethernet cards connecting both machines, with up-to-date architectures and CPUs. What latency in microseconds are we talking about here for a 1472-byte message (MTU minus IP/UDP headers)?
As @Sachin described very well, what I am looking for is the messaging queue and the latency number for sending a message from A to B, like below:
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
If you do not require a broker in between, 0MQ gave us the best performance (you will need to test the numbers on your platform/use case). If you use a broker in between, both ActiveMQ and RabbitMQ performed in the same range. Using Redis as a messaging server did not hold up for us.
If you are not using a messaging server, options such as Netty, JGroups, etc. might be useful (not sure about your programming language).
You could look into reliable UDP as well if you go with straight socket connectivity.
Hope it helps.
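If it is useful, here is a rough sketch of how the broker-less (0MQ) case can be timed: a REQ/REP round trip over TCP measured in user space. The endpoint address is a placeholder and the other host must run a matching REP socket that echoes the message back; compile with -lzmq.

// Not a benchmark, just a single timed round trip using the libzmq C API.
#include <zmq.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    void *ctx  = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(sock, "tcp://192.168.1.10:5555");   // placeholder echo server

    std::vector<char> msg(1472, 'x');               // payload size from the question
    char reply[1472];

    auto t0 = std::chrono::steady_clock::now();
    zmq_send(sock, msg.data(), msg.size(), 0);
    zmq_recv(sock, reply, sizeof(reply), 0);        // blocks until the echo returns
    auto t1 = std::chrono::steady_clock::now();

    std::printf("round trip: %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count());

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}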
The lower bound would be at least two TCP connections plus the routing time inside the messaging queue server (meaning the delays associated with these):
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
Of course, if you build in redundancy, fault tolerance, etc., then you are going to end up well above this lower bound.
It looks like you are talking about a UDP-based MQ, because you mentioned the MTU. Well, for UDP-based MQs this time is usually measured as the time required to publish a message and see it come back on the message bus. So it is a round-trip time, not the one-way time you described. This can usually be done in less than 6 microseconds, depending of course on your choice of LAN.