# Multi-Queue
## What is Multi-Queue?
It is an acceleration feature that lets you assign more than one packet queue and CPU to an interface.
When most of the traffic is accelerated by SecureXL, the CPU load from the CoreXL SND instances can be very high, while the CPU load from the CoreXL FW instances can be very low. This is an inefficient use of CPU capacity.
By default, the number of CPU cores allocated to CoreXL SND instances is limited by the number of network interfaces that handle the traffic. Because each interface has one traffic queue, only one CPU core can handle each traffic queue at a time. This means that each CoreXL SND instance can use only one CPU core at a time for each network interface.
Check Point Multi-Queue lets you configure more than one traffic queue for each network interface. For each interface, you can use more than one CPU core (that runs CoreXL SND) for traffic acceleration. This balances the load efficiently between the CPU cores that run the CoreXL SND instances and the CPU cores that run CoreXL FW instances.
Important – Multi-Queue applies only if SecureXL is enabled.
## Multi-Queue Requirements and Limitations
Tip 1
- Multi-Queue is not supported on computers with one CPU core.
- Network interfaces must use a driver that supports Multi-Queue. Only network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue.
- You can configure a maximum of five interfaces with Multi-Queue.
- You must reboot the Security Gateway after all changes in the Multi-Queue configuration.
- For best performance, it is not recommended to assign both SND and a CoreXL FW instance to the same CPU core.
- Do not change the IRQ affinity of queues manually. Changing the IRQ affinity of the queues manually can adversely affect performance.
- Multi-Queue is relevant only if SecureXL and CoreXL are enabled.
- You cannot use the “sim affinity” or the “fw ctl affinity” commands to change and query the IRQ affinity of the Multi-Queue interfaces.
- The number of queues is limited by the number of CPU cores and the type of interface driver:
| Network card driver | Speed | Maximal number of RX queues |
|---|---|---|
| igb | 1 Gb | 4 |
| ixgbe | 10 Gb | 16 |
| i40e | 40 Gb | 14 |
| mlx5_core | 40 Gb | 10 |
- The maximum number of RX queues caps how many CoreXL SND instances can empty packet buffers for an individual Multi-Queue-enabled interface that uses that driver.
- Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances in the following scenario (sk114625):
  - Multi-Queue is enabled for on-board interfaces (e.g., Mgmt, Sync), and
  - the number of active RX queues was set to either 3 or 4 (with the cpmq set rx_num igb <number> command).

  The on-board interfaces on these appliances use the igb driver, which supports up to 4 RX queues. However, the I211 controller on these on-board interfaces supports only up to 2 RX queues. This problem was fixed in:
  - Check Point R80.10
  - Jumbo Hotfix Accumulator for R77.30 – since Take_198
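The driver limits above combine with the available core count to determine how many SND instances can actually service one interface. A minimal sketch of that calculation (the driver/limit map mirrors the table above; the function name is my own):

```python
# Maximum RX queues per interface, by driver (values from the table above).
MAX_RX_QUEUES = {"igb": 4, "ixgbe": 16, "i40e": 14, "mlx5_core": 10}

def usable_snd_cores(driver: str, snd_cores: int) -> int:
    """How many SND cores can empty the queues of one interface:
    whichever is smaller, the core count or the driver's queue limit."""
    return min(snd_cores, MAX_RX_QUEUES.get(driver, 1))

print(usable_snd_cores("igb", 8))    # driver cap wins -> 4
print(usable_snd_cores("ixgbe", 6))  # core count wins -> 6
```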
## When Multi-Queue will not help
Tip 2
- When most of the processing is done in CoreXL – either in the Medium path, or in the Firewall path (Slow path).
- All current CoreXL FW instances are highly loaded, so there are no CPU cores that can be reassigned to SecureXL.
- When IPS, or other deep inspection Software Blades are heavily used.
- When all network interface cards are processing the same amount of traffic.
- When all CPU cores that are currently used by SecureXL are congested.
- When trying to increase the traffic session rate.
- When there is not enough diversity of traffic flows. In the extreme case of a single flow, for example, traffic will be handled only by a single CPU core. (Clarification: The more traffic is passing to/from different ports/IP addresses, the more you benefit from Multi-Queue. If there is a single traffic flow from a single Client to a single Server, then Multi-Queue will not help.)
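The flow-diversity point follows from how receive-side scaling distributes packets: the NIC hashes each flow's tuple and uses the hash to pick a queue, so one flow always lands in the same queue no matter how many queues exist. A simplified illustration (Python's `hash` stands in for the NIC's real Toeplitz hash; all names are mine):

```python
def pick_queue(src_ip, src_port, dst_ip, dst_port, num_queues):
    # RSS-style distribution: the same flow tuple always maps to the same queue.
    return hash((src_ip, src_port, dst_ip, dst_port)) % num_queues

# A single client-to-server flow occupies exactly one queue (one SND core)...
q = pick_queue("10.0.0.1", 40000, "10.0.0.2", 443, 4)
assert all(pick_queue("10.0.0.1", 40000, "10.0.0.2", 443, 4) == q
           for _ in range(100))

# ...while many distinct flows (different source ports here) spread across queues.
queues = {pick_queue("10.0.0.1", p, "10.0.0.2", 443, 4) for p in range(40000, 40100)}
print(sorted(queues))
```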
## Multi-Queue is recommended
Multi-Queue is recommended when all of the following conditions are met:
- The load on the CPU cores that run the CoreXL SND instances is high (idle < 20%).
- The load on the CPU cores that run the CoreXL FW instances is low (idle > 50%).
- There are no CPU cores left to be assigned to the SND by changing interface affinity.
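The three conditions above can be expressed as a simple check. A sketch assuming you have already collected per-core idle percentages (e.g., from `top`) and know which cores run SND vs. FW instances; the thresholds are the ones stated in the list above, and the function name is my own:

```python
def multi_queue_recommended(snd_idle, fw_idle, spare_cores):
    """snd_idle / fw_idle: idle-% per CPU core running SND / FW instances.
    spare_cores: cores that could still be given to the SND by changing
    interface affinity instead of enabling Multi-Queue."""
    snd_loaded = all(idle < 20 for idle in snd_idle)  # SND cores busy
    fw_light   = all(idle > 50 for idle in fw_idle)   # FW cores lightly loaded
    return snd_loaded and fw_light and spare_cores == 0

print(multi_queue_recommended([5, 10], [70, 80, 65, 90], 0))  # True
print(multi_queue_recommended([5, 10], [70, 80, 65, 90], 2))  # False: reassign cores first
```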
## Multi-Queue support on Appliance vs. Open Server

| Gateway type | Network interfaces that support Multi-Queue |
|---|---|
| Check Point Appliance | |
| Open Server | Network cards that use the igb (1Gb), ixgbe (10Gb), i40e (40Gb), or mlx5_core (40Gb) drivers support Multi-Queue. |
## Multi-Queue support on Open Server (Intel Network Cards)
Tip 3
The following list gives an overview of all Intel cards from the Check Point HCL for open servers as of 11/21/2018.
The list is cross-referenced with the Intel drivers. I do not assume any liability for the correctness of this information. These lists should only be used to help you find the right drivers; this is not an official Check Point document!
So please always read the official Check Point documents.
| Intel network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| – | 1 | 82598EB | 8086:25e7 | ixgbe | PCI-E | 10G Copper | yes |
| – | 2 | 82598EB | 8086:10ec | ixgbe | PCI-E | 10G Copper | yes |
| 10 Gigabit XF family (Dual and Single Port models, SR and LR) | 2 | 82598 | 8086:10c6 | ixgbe | PCI-E | 10G Fiber | yes |
| – | 2 | X540 | 8086:1528 | ixgbe | PCI-E | 100/1G/10G | yes |
| – | 2 | 82580 | – | igb | PCI-E | 10/100/1G | yes |
| – | 2 | 82580 | – | igb | PCI-E | 10/100/1G Copper | yes |
| X520-SR2, X520-SR1, X520-LR1, X520-DA2 | 2 | X520 | – | ixgbe | PCI-E | 10G Fiber | yes |
| – | 4 | 82575GB | 8086:10d6 | igb | PCI-E | 10/100/1G Copper | yes |
| – | 4 | – | – | igb | PCI-E | 1G Copper | yes |
| – | 1 | 82597EX | 8086:109e | ixgb | PCI-X | 10G Copper | no |
| – | 1 | 82597EX | 8086:1b48 | ixgb | PCI-X | 10G Fiber | no |
| – | 1 | 82597EX | 8086:1a48 | ixgb | PCI-X | 10G Fiber | no |
| – | 2 | 82546GB | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 2 | 82576 | 8086:10e6 | igb ? | PCI-E | 1G Fiber | yes ? |
| – | 2 | 82576 | – | igb | PCI-E | 1G Copper | yes |
| – | 4 | 82576 | 8086:10e8 | igb | PCI-E | 10/100/1G Copper | yes |
| – | 4 | 82546 | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no |
| – | 1 | 82546 ? 82545 ? | – | e1000 | PCI-X | 1G Fiber | no |
| – | 1 | 82546 ? 82545 ? | – | e1000 | PCI-X | 1G Fiber | no |
| – | 2 | 82546 ? 82545 ? | – | e1000 | PCI-X | 1G Fiber | no |
| – | 4 | 82546 ? 82545 ? | – | e1000 | PCI-X | 1G Fiber | no |
| – | 1 | 82571 ? | 8086:107e | e1000 | PCI-E | 1G Fiber | no |
| – | 2 | 82571 ? | 8086:115f | e1000 | PCI-E | 1G Fiber | no |
| – | 4 | 82571 ? | 8086:10a5 | e1000 | PCI-E | 1G Fiber | no |
| – | 1 | 82571 | 8086:1082 | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 2 | 82571 | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 2 | 82571 | 8086:108a | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 4 | 82571 | 8086:10a4 | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 4 | 82571 | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 1 | 82544 | – | e1000 | PCI-X | 1G Fiber | no |
For all entries marked with "?", I could not clarify the details exactly.
## Multi-Queue support on Open Server (HP and IBM Network Cards)
Tip 4
The following list gives an overview of all HP cards from the Check Point HCL for open servers as of 11/22/2018.
The list is cross-referenced with the Linux drivers. I do not assume any liability for the correctness of this information. These lists should only be used to help you find the right drivers; this is not an official Check Point document!
So please always read the official Check Point documents.
| HP network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| – | 4 | BCM5719 | 14e4:1657 | tg3 | PCI-E | 1G Copper | no |
| – | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes |
| – | 4 | Intel I350 | 8086:1521 | igb | PCI-E | 1G Copper | yes |
| – | 2 | Intel 82599EB | 8086:10fb | ixgbe | PCI-E | 10G Fiber | yes |
| – | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes |
| – | 2 | Intel X710 | 8086:1572 | i40e | PCI-E | 10G Copper | yes |
| – | 2 | Intel X540-AT2 | 8086:1528 | ixgbe | PCI-E | 10G Copper | yes |
| – | 1 | Intel 82572GI | 8086:10b9 | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 1 | BCM5721 KFB | 14e4:1659 | tg3 | PCI-E | 10/100/1G Copper | no |
| – | 4 | BCM5715S | 14e4:1679 | tg3 | PCI-E | 1G Copper | no |
| NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class Blade System | 2 | BCM5715S | – | tg3 | PCI-E | 1G Copper | no |
| – | 4 | Intel 82546GB | 8086:10b5 | e1000 | PCI-X | 10/100/1G Copper | no |
| – | 2 | Intel 82571EB | 8086:105e | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 4 | Intel 82571EB | 8086:10bc | e1000 | PCI-E | 10/100/1G Copper | no |
| – | 4 | Intel | 8086:150e | igb | PCI-E | 10/100/1G Copper | yes |
| – | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 1G Copper | no |
| – | 2 | BCM5708S | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no |
| – | 1 | Broadcom 5708 | 14e4:16ac | bnx2 | PCI-E | 10/100/1G Copper | no |
| – | 2 | BCM5706 | – | bnx2 | PCI-E | 10/100/1G Copper | no |
| – | 2 | NX3031 | 4040:0100 | ??? | PCI-E | 10G Fiber | no |
| – | 2 | Emulex OneConn | 19a2:0700 | be2net | PCI-E | 10G Fiber | no |
| – | 2 | Emulex OneConn | 19a2:0710 | be2net | PCI-E | 10G Fiber | no |
| – | 2 | Intel 82546EB | 8086:1010 | e1000 | PCI-X | 10/100/1G Copper | no |
For all entries marked with "?", I could not clarify the details exactly.
| IBM network card | Ports | Chipset | PCI ID | Driver | PCI | Speed | MQ |
|---|---|---|---|---|---|---|---|
| Broadcom 10Gb 4-Port Ethernet Expansion Card (CFFh) for IBM BladeCenter | 4 | BCM57710 | – | bnx2x | PCI-E | 10G Fiber | no |
| – | 4 | I350 | – | igb | PCI-E | 1G Copper | yes |
| – | 1 | ??? (1) | – | ??? | PCI-X | 10/100/1G Copper | ??? |
| – | 2 | ??? (1) | – | ??? | PCI-X | 10/100/1G Copper | ??? |
| – | 2 | 82571GB | – | e1000 | PCI-E | 10/100/1G Copper | no |
(1) These network cards cannot even be found on Google.
## Notes on the Intel igb and ixgbe drivers
I used the LKDDb database to identify the drivers. LKDDb is an attempt to build a comprehensive database of the hardware and protocols known by Linux kernels. The driver database includes the numeric identifiers of the hardware, the kernel configuration menu needed to build the driver, and the driver filename. The database is built automatically from the kernel sources, so it is very easy to keep it up to date. This was the basis of the cross-reference between the Check Point HCL and the Intel drivers.
Link to LKDDb web database:
https://cateee.net/lkddb/web-lkddb/
Link to LKDDb database driver:
## How to recognize the driver
With ethtool you can display the type and version of the driver, for example for the interface eth0:
```
# ethtool -i eth0
driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0
```
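If you need the driver name in a script (for example, to decide whether an interface uses a Multi-Queue-capable driver), the `ethtool -i` output shown above is easy to parse. A sketch, with the sample text mirroring the output above and all function names my own:

```python
def parse_ethtool_info(text: str) -> dict:
    """Parse `ethtool -i <interface>` output into a key/value dict."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            info[key.strip()] = value.strip()
    return info

sample = """driver: igb
version: 2.1.0-k2
firmware-version: 3.2-9
bus-info: 0000:02:00.0"""

info = parse_ethtool_info(sample)
print(info["driver"])  # igb
# Drivers that support Multi-Queue, per the requirements section above:
print(info["driver"] in ("igb", "ixgbe", "i40e", "mlx5_core"))  # True
```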
## Active RX multi queues – formula
By default, the Security Gateway calculates the number of active RX queues with this formula:
RX queues = [Total Number of CPU cores] – [Number of CoreXL FW instances]
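For example, on a 12-core Security Gateway running 10 CoreXL FW instances, the default is 12 − 10 = 2 active RX queues. The formula as a one-line helper (the function name is my own):

```python
def default_rx_queues(total_cores: int, fw_instances: int) -> int:
    # Default: every CPU core not running a CoreXL FW instance gets an RX queue.
    return total_cores - fw_instances

print(default_rx_queues(12, 10))  # 2
```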
## Configure
Here I would refer to the following links:
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
## References
Best Practices – Security Gateway Performance
Multi-Queue does not work on 3200 / 5000 / 15000 / 23000 appliances when it is enabled for on-board interfaces
Performance Tuning R80.10 Administration Guide
Performance Tuning R80.20 Administration Guide
Intel:
Download Intel® Network Adapter Virtual Function Driver for Intel® 10 Gigabit Ethernet Network Connections
Download Network Adapter Driver for Gigabit PCI Based Network Connections for Linux*
Download Intel® Network Adapter Driver for 82575/6, 82580, I350, and I210/211-Based Gigabit Network Connections for Linu…
LKDDb (Linux Kernel Driver Database):
https://cateee.net/lkddb/web-lkddb/
Copyright by Heiko Ankenbrand 1994-2019