Posted on behalf of Narendra K, Dell Linux Engineering
With the advent of 10Gbps network adapters, a single CPU core is increasingly becoming a bottleneck in processing network traffic. But using multiple cores to drive I/O through a single, shared queue requires software locking mechanisms that can reduce efficiency. Multiqueue overcomes this inefficiency by distributing packet processing across multiple queues of the network adapter and, in turn, across multiple CPU cores.
The Linux kernel supports network interface cards with multiple transmit and receive queues. Multiple queues improve network performance by distributing packet processing across the multiple CPU cores common in modern servers. RHEL 6, launched recently, includes support for this functionality.
In Linux, a network queue is usually a ring buffer, implemented in software with help from the hardware, where packets are queued for further processing. Network adapter vendors may implement queues differently; the data sheet of the adapter or chip contains the details.
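As a minimal sketch of the ring-buffer idea (the `RxRing` class and its methods are hypothetical; real drivers manage descriptor rings shared with the hardware), a fixed-size queue that drops packets when full might look like:

```python
from collections import deque

class RxRing:
    """Illustrative receive ring: fixed capacity, FIFO order,
    drops new packets when full (not a real driver API)."""

    def __init__(self, size):
        self.size = size
        self.slots = deque()

    def enqueue(self, pkt):
        if len(self.slots) == self.size:
            return False          # ring full: packet is dropped
        self.slots.append(pkt)
        return True

    def dequeue(self):
        # Returns the oldest queued packet, or None if the ring is empty
        return self.slots.popleft() if self.slots else None
```

With multiqueue, the adapter exposes several such rings, each of which can be serviced by a different CPU core.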
Multiqueue allows network devices to route incoming and outgoing packets to multiple queues based on various criteria. The network adapter can route packets to one of a set of receive queues based on certain filter mechanisms (such as Receive Side Scaling filters or Layer 3/4 filters). This, combined with the MSI-X capability of the host system, helps distribute packet processing across multiple CPU cores. On the transmit side, queue selection is usually based on packet flow information such as the source and destination addresses and the source and destination ports.
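The transmit-side selection can be sketched as hashing the flow 4-tuple to a queue index. This is a simplified stand-in for the kernel's actual flow hash; the function name and the use of CRC32 here are illustrative assumptions:

```python
import zlib

def select_tx_queue(src_ip, dst_ip, src_port, dst_port, num_queues):
    # Hash the flow 4-tuple; CRC32 stands in for the kernel's hash function.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queues

# Every packet of a given flow maps to the same queue index.
queue = select_tx_queue("192.0.2.1", "198.51.100.7", 40000, 80, 8)
```

Because the mapping is deterministic per flow, all packets of one TCP connection stay on the same transmit queue, which preserves per-flow ordering while different flows spread across queues.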
Dell PowerEdge servers support a variety of network adapters with multiqueue capability. Consult the network adapter's documentation to confirm whether the adapter supports multiqueue; recent versions of ethtool can also report the number of queues (channels) an adapter exposes via ethtool -l <interface>.
For example, on a system with an Intel Gigabit ET Dual Port Server Adapter, which has multiple queues, the dmesg output shows that this network controller supports eight receive and eight transmit queues.
And the /proc/interrupts entries for the interface would look similar to the following, one line per queue:
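As an invented illustration (the interrupt numbers, counts, and the interface name eth2 are all hypothetical), the entries would be laid out like this:

```
          CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
 65:      1523      0      0      0      0      0      0      0   PCI-MSI-edge   eth2-TxRx-0
 66:         0   1498      0      0      0      0      0      0   PCI-MSI-edge   eth2-TxRx-1
 ...
 72:         0      0      0      0      0      0      0   1511   PCI-MSI-edge   eth2-TxRx-7
```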
The first column shows the MSI-X interrupt vectors being used by the network adapter, columns two to nine show the number of times each of these interrupts occurred on each CPU core, column 10 shows the interrupt type, and column 11 shows the interface name with the TxRx queue number. It can be seen that each TxRx queue is mapped to its own interrupt vector. Received packets can be steered to any of the queues based on the filter mechanisms and handled by one of the eight CPU cores.
The behavior would be similar for a 10G network adapter.
Stay tuned for more blogs on Multiqueue use case scenarios.
Further Reading:
David Miller’s presentation on multiqueue.