How to use pktgen to generate 100GbE traffic with DPDK?

Issue

Specs:

Ubuntu 20.04

Intel E810 NIC 2x100GbE

DPDK 21.11.0

pktgen 22.04.1

Supermicro 414GS-TNR
Dual AMD EPYC Processors

This is my first time using DPDK and pktgen, and I am having trouble getting things set up. The documentation is lacking on what exactly to do once everything is installed. I am working on an application that reads 100GbE UDP packets and writes the payload to file at line rate. At this point I am simply trying to get pktgen to work on the machine that I will use to generate traffic. Eventually this traffic will be received by a Mellanox ConnectX-5 NIC on a different machine.

I followed the directions for setting up pktgen, but it does not seem to be doing anything. Here is the cfg file:

description = 'A Pktgen default simple configuration'

# Setup configuration
setup = {
    'exec': (
    'sudo', '-E'
        ),

    'devices': (
        'a1:00.0'
        ),
    # UIO module type, igb_uio, vfio-pci or uio_pci_generic
    'uio': 'vfio-pci'
    }

# Run command and options
run = {
    'exec': ('sudo', '-E'),

    # Application name and use app_path to help locate the app
    'app_name': 'pktgen',

    # using (sdk) or (target) for specific variables
    # add (app_name) of the application
    # Each path is tested for the application
    'app_path': (
        './usr/local/bin/%(app_name)s',
        '/usr/local/bin/%(app_name)s'
        ),

    'cores': '2,24-26,27-29',
    'nrank': '4',
    'proc': 'auto',
    'log': '7',
    'prefix': 'pg',

    'blocklist': (
        #'03:00.0', '05:00.0',
        #'81:00.0', '84:00.0'
        ),
    'allowlist': (
        'a1:00.0,safe-mode-support=1',
        ),

    'opts': (
        '-v',
        '-T',
        '-P',
        '-j',
        ),
    'map': (
        '[24:26].0',
        ),

    'theme': 'themes/black-yellow.theme'
    }

And here is the output:

>>> sdk 'None', target 'None'
<module 'cfg' from 'cfg/default-100G.cfg'>
   Trying ./usr/local/bin/pktgen
sudo -E ./usr/local/bin/pktgen -l 2,24-26,27-29 -n 4 --proc-type auto --log-level 7 --file-prefix pg -a a1:00.0,safe-mode-support=1 -- -v -T -P -j -m [24:26].0 -f themes/black-yellow.theme
[sudo] password for cogrfserver:



Copyright(c) <2010-2021>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Auto-detected process type: PRIMARY
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/pg/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:1592) device: 0000:a1:00.0 (socket 1)
ice_load_pkg_type(): Active package is: 1.3.30.0, ICE OS Default Package (single VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
**** Jumbo Frames of 9618 enabled.

*** Copyright(c) <2010-2021>, Intel Corporation. All rights reserved.
*** Pktgen  created by: Keith Wiles -- >>> Powered by DPDK <<<

>>> Packet Burst 128, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048
 Port: Name         IfIndex Alias        NUMA  PCI
    0: net_ice         0                   1   8086:1592/a1:00.0


=== port to lcore mapping table (# lcores 7) ===
   lcore:    2       3       4       5       6       7       8       9      10      11      12      13      14      15      16      17      18      19      20      21      22      23      24      25      26      27      28      29      Total
port   0: ( D: T) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) ( 0: 0) ( 0: 0) = ( 1: 1)
Total   : ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) ( 0: 0) ( 0: 0)
  Display and Timer on lcore 2, rx:tx counts per port/lcore

>>>> Configuring 1 ports, MBUF Size 9746, MBUF Cache Size 2048
Lcore:
   24, RX-Only
                RX_cnt( 1): (pid= 0:qid= 0)
   26, TX-Only
                TX_cnt( 1): (pid= 0:qid= 0)

Port :
    0, nb_lcores  2, private 0x555d4c620700, lcores: 24 26


Initialize Port 0 -- TxQ 1, RxQ 1
** Device Info (a1:00.0, if_index:0, flags 00000066) **
   min_rx_bufsize : 1024  max_rx_pktlen     : 9728  hash_key_size :   52
   max_rx_queues  :   64  max_tx_queues     :   64  max_vfs       :    0
   max_mac_addrs  :   64  max_hash_mac_addrs:    0  max_vmdq_pools:    0
   vmdq_queue_base:    0  vmdq_queue_num    :    0  vmdq_pool_base:    0
   nb_rx_queues   :    0  nb_tx_queues      :    0  speed_capa    : 000057f4

   flow_type_rss_offloads:0000000000007ffc  reta_size             :  512
   rx_offload_capa       :VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM QINQ_STRIP OUTER_IPV4_CKSUM VLAN_FILTER VLAN_EXTEND SCATTER TIMESTAMP KEEP_CRC
   tx_offload_capa       :VLAN_INSERT IPV4_CKSUM UDP_CKSUM TCP_CKSUM SCTP_CKSUM TCP_TSO OUTER_IPV4_CKSUM QINQ_INSERT MULTI_SEGS MBUF_FAST_FREE OUTER_UDP_CKSUM
   rx_queue_offload_capa :0000000000000000  tx_queue_offload_capa :0000000000010000
   dev_capa              :0000000000000000

  RX Conf:
     pthresh        :    8 hthresh          :    8 wthresh        :    0
     Free Thresh    :   32 Drop Enable      :    0 Deferred Start :    0
     offloads       :0000000000000000
  TX Conf:
     pthresh        :   32 hthresh          :    0 wthresh        :    0
     Free Thresh    :   32 RS Thresh        :   32 Deferred Start :    0
     offloads       :0000000000000000
  Rx: descriptor Limits
     nb_max         : 4096  nb_min          :   64  nb_align      :   32
     nb_seg_max     :    0  nb_mtu_seg_max  :    0
  Tx: descriptor Limits
     nb_max         : 4096  nb_min          :   64  nb_align      :   32
     nb_seg_max     :    0  nb_mtu_seg_max  :    0
  Rx: Port Config
     burst_size     :   32  ring_size       : 1024  nb_queues     :    1
  Tx: Port Config
     burst_size     :   32  ring_size       : 1024  nb_queues     :    1
  Switch Info: (null)
     domain_id      :65535  port_id         :    0

    Create: Default RX  0:0  - Memory used (MBUFs 16384 x (size 9746 + Hdr 128)) + 192 = 157985 KB, headroom 128
      Set RX queue stats mapping pid 0, q 0, lcore 24


    Create: Default TX  0:0  - Memory used (MBUFs 16384 x (size 9746 + Hdr 128)) + 192 = 157985 KB, headroom 128
    Create: Range TX    0:0  - Memory used (MBUFs 16384 x (size 9746 + Hdr 128)) + 192 = 157985 KB, headroom 128
    Create: Rate TX     0:0  - Memory used (MBUFs 16384 x (size 9746 + Hdr 128)) + 192 = 157985 KB, headroom 128
    Create: Sequence TX 0:0  - Memory used (MBUFs 16384 x (size 9746 + Hdr 128)) + 192 = 157985 KB, headroom 128
    Create: Special TX  0:0  - Memory used (MBUFs    64 x (size 9746 + Hdr 128)) + 192 =    618 KB, headroom 128

                                                                       Port memory used = 790543 KB
Src MAC b4:96:91:b0:0d:f8
 <Promiscuous mode Enabled>
                                                                      Total memory used = 790543 KB
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).


=== Display processing on lcore 2
WARNING: Nothing to do on lcore 25: exiting
WARNING: Nothing to do on lcore 27: exiting
WARNING: Nothing to do on lcore 28: exiting
WARNING: Nothing to do on lcore 29: exiting
- Ports 0-0 of 1   <Main Page>  Copyright(c) <2010-2021>, Intel Corporation
  Flags:Port        : P------Sngl       :0
Link State          :           <--Down-->     ---Total Rate---
Pkts/s Rx           :                    0                    0
       Tx           :                    0                    0
MBits/s Rx/Tx       :                  0/0                  0/0
Pkts/s Rx Max       :                    0                    0
       Tx Max       :                    0                    0
Broadcast           :                    0
Multicast           :                    0
Sizes 64            :                    0
      65-127        :                    0
      128-255       :                    0
      256-511       :                    0
      512-1023      :                    0
      1024-1518     :                    0
Runts/Jumbos        :                  0/0
ARP/ICMP Pkts       :                  0/0
Errors Rx/Tx        :                  0/0
Total Rx Pkts       :                    0
      Tx Pkts       :                    0
      Rx/Tx MBs     :                  0/0
TCP Flags           :               .A....
TCP Seq/Ack         :  305419896/305419920
Pattern Type        :              abcd...
Tx Count/% Rate     :        Forever /100%
Pkt Size/Tx Burst   :            64 /  128
TTL/Port Src/Dest   :       64/ 1234/ 5678
Pkt Type:VLAN ID    :      IPv4 / TCP:0001
-- Pktgen 22.04.1 (D:    8086:1592/a1:00.0by DPDK  (pid:12889) ----------------
** Version: DPDK 21.11.0, Command Line Interface without timers
Pktgen:/>
Executing 'themes/black-yellow.theme'
t         theme default white white off
Pktgen:/> theme top.spinner cyan none bold
Pktgen:/> theme top.ports green none bold
Pktgen:/> theme top.page white none bold
Pktgen:/> theme top.copyright yellow none off
Pktgen:/> theme top.poweredby blue none bold
Pktgen:/> theme sep.dash blue none off
Pktgen:/> theme sep.text white none off
Pktgen:/> theme stats.port.label blue none bold
Pktgen:/> theme stats.port.flags blue none bold
Pktgen:/> theme stats.port.data blue none off
Pktgen:/> theme stats.port.status green none off
Pktgen:/> theme stats.port.linklbl green none bold
Pktgen:/> theme stats.port.link green none off
Pktgen:/> theme stats.port.ratelbl white none bold
Pktgen:/> theme stats.port.rate white none off
Pktgen:/> theme stats.port.sizelbl cyan none bold
Pktgen:/> theme stats.port.sizes cyan none off
Pktgen:/> theme stats.port.errlbl red none bold
Pktgen:/> theme stats.port.errors red none off
Pktgen:/> theme stats.port.totlbl blue none bold
Pktgen:/> theme stats.port.totals blue none off
Pktgen:/> theme stats.dyn.label blue none bold
Pktgen:/> theme stats.dyn.values green none off
Pktgen:/> theme stats.stat.label magenta none off
Pktgen:/> theme stats.stat.values white none off
Pktgen:/> theme stats.total.label red none bold
Pktgen:/> theme stats.total.data blue none bold
Pktgen:/> theme stats.colon blue none bold
Pktgen:/> theme stats.rate.count blue none bold
Pktgen:/> theme stats.bdf blue none off
Pktgen:/> theme stats.mac green none off
Pktgen:/> theme stats.ip cyan none off
Pktgen:/> theme pktgen.prompt green none off
Pktgen:/> cls
| Ports 0-0 of 1   <Main Page>  Copyright(c) <2010-2021>, Intel Corporation
  Flags:Port        : P------Sngl       :0
Link State          :       <UP-100000-FD>     ---Total Rate---
Pkts/s Rx           :                    0                    0
       Tx           :                    0                    0
MBits/s Rx/Tx       :                  0/0                  0/0
Pkts/s Rx Max       :                    7                    7
       Tx Max       :                    0                    0
Broadcast           :                    0
Multicast           :                   60
Sizes 64            :                    0
      65-127        :                  178
      128-255       :                   29
      256-511       :                   60
      512-1023      :                    0
      1024-1518     :                    0
Runts/Jumbos        :                  0/0
ARP/ICMP Pkts       :                  0/0
Errors Rx/Tx        :                  0/0
Total Rx Pkts       :                  267
      Tx Pkts       :                    0
      Rx/Tx MBs     :                  0/0
TCP Flags           :               .A....
TCP Seq/Ack         :  305419896/305419920
Pattern Type        :              abcd...
Tx Count/% Rate     :        Forever /100%
Pkt Size/Tx Burst   :            64 /  128
TTL/Port Src/Dest   :       64/ 1234/ 5678
Pkt Type:VLAN ID    :      IPv4 / TCP:0001

My questions are:

  1. What else am I supposed to do to start generating 100GbE traffic?
  2. I notice that once the NIC is bound to the vfio-pci driver for DPDK, the interface name no longer shows up in ifconfig or the DPDK driver tool. Why?

Thanks.

Solution

Several incorrect or incomplete assumptions have been made here, which lead to the main queries:

  1. What else should I do to start generating 100GbE traffic?
  2. Once the driver is switched to vfio-pci for DPDK, the interface name no longer shows up in ifconfig.
  3. The pktgen prompt comes up, but I am unsure what to do there. Do I assign MAC addresses? IP addresses?
  4. Nowhere in the documentation is there an example.

Addressing the issues one by one:

  • Unable to use Pktgen:
  1. The DPDK Pktgen documentation can be found here.
  2. The easiest way to start packet generation with default values is to simply run "start all" at the Pktgen prompt.
  3. For creating a range of packets, refer either to stackoverflow-1 or the pktgen runtime options.
  4. Since a Mellanox NIC is mentioned as the next NIC in use, also refer to link-1 and link-2 on StackOverflow.
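As a concrete illustration of step 2, once the prompt appears and the link is up, traffic can be started and stopped from the Pktgen console. The size and rate values below are illustrative, not required; port 0 matches the single port mapped in the question's cfg:

```
Pktgen:/> set 0 size 1518     # optional: change from the 64-byte default
Pktgen:/> set 0 rate 100      # percentage of line rate
Pktgen:/> start 0             # or 'start all' for every port
Pktgen:/> stop 0              # or 'stop all'
```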

Note: almost all the information required to get started is covered in the Pktgen documentation and the official site, so I highly recommend spending some time there. Hence I disagree with the statement "Nowhere in the documentation is there an example."
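One sanity check worth having at hand when judging whether the generator is really running at 100GbE: the theoretical line rate for a given frame size follows from the 20 bytes of per-frame wire overhead (8-byte preamble plus 12-byte inter-frame gap). This is a quick illustrative calculation, not part of Pktgen itself:

```python
def line_rate_pps(link_bps: float, frame_bytes: int) -> float:
    """Theoretical packets/s for a given Ethernet frame size,
    accounting for the 8-byte preamble and 12-byte inter-frame gap."""
    wire_bits = (frame_bytes + 20) * 8
    return link_bps / wire_bits

# 64-byte frames at 100 Gbit/s -> ~148.81 Mpps
print(round(line_rate_pps(100e9, 64) / 1e6, 2))    # 148.81
# 1518-byte frames -> ~8.13 Mpps
print(round(line_rate_pps(100e9, 1518) / 1e6, 2))  # 8.13
```

So a Tx counter near 148.8 Mpps with 64-byte packets means the port is saturated.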

  • The interface name no longer shows up in ifconfig:
  1. The whole intent of the DPDK libraries is to bypass the kernel NIC driver and network-stack processing by binding the device to a UIO (Userspace I/O) driver.
  2. Broadly, NICs in DPDK are classified as physical NICs (identified by PCIe address) and virtual NICs (backed by a netdev).
  3. The Intel E810 is a physical NIC that can be bound to DPDK in userspace with igb_uio or vfio-pci, for both PF and VF (refer).
  4. In the DPDK documentation, refer to the section on device configuration and others for a complete overview.
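For reference, the bind/unbind cycle is done with DPDK's dpdk-devbind.py tool. The commands below are a sketch, using the PCI address from the question and assuming the tool is on PATH:

```
# list current driver bindings
dpdk-devbind.py --status

# bind the E810 port to vfio-pci; it then disappears from ifconfig / ip link
sudo dpdk-devbind.py --bind=vfio-pci 0000:a1:00.0

# return the port to the kernel 'ice' driver to get the netdev back
sudo dpdk-devbind.py --bind=ice 0000:a1:00.0
```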

Note:

  • Most vendor NICs (including Intel's) allow complete configuration of the device by exposing the PCIe config space to userspace. All device configuration and RX/TX then happen directly, without any intermediate or kernel-driver intervention. So binding the device to userspace removes it from the kernel's netdev instances.
  • If the intention is to keep the NIC under the kernel, check out the AF_PACKET and PCAP PMDs.
  • Mellanox CX-5, CX-6 and CX-7 NICs instead use a port/switch representation model. The hardware resources are managed by the kernel driver, so each port keeps an associated ethdev/netlink instance. No bind to a DPDK UIO driver takes place; instead, access to the DMA buffers is redirected to userspace via port representors.

The DPDK Linux getting started guide and quick start page will also help in resolving these queries.
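One item from those guides worth calling out here: the EAL log above shows "No available 2048 kB hugepages reported", so the run is presumably relying on 1 GB pages. A typical 2 MB hugepage reservation on a two-socket box looks like the following (the counts are illustrative and should be sized to the mempools in use):

```
# reserve 1024 x 2 MB hugepages on each NUMA node
echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# mount hugetlbfs if it is not already mounted
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs nodev /dev/hugepages
```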

  • Do I assign MAC addresses? IP addresses?

The simple answer is no, for both a DPDK application and DPDK Pktgen. Once the kernel and network stack are bypassed, there is no longer any concept of local IP termination or packet forwarding. The Pktgen prompt already has a default set of values with which it transmits packets with the desired MAC, VLAN, IP, and TCP/UDP payload.
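That said, the defaults can be overridden from the prompt when the receiving side expects specific headers, e.g. for the UDP stream destined to the ConnectX-5 machine. The addresses below are placeholders, not values from the question:

```
Pktgen:/> set 0 proto udp
Pktgen:/> set 0 src ip 192.168.1.1/24
Pktgen:/> set 0 dst ip 192.168.1.2
Pktgen:/> set 0 dst mac 00:11:22:33:44:55   # MAC of the receiving NIC (placeholder)
Pktgen:/> set 0 sport 1234
Pktgen:/> set 0 dport 5678
Pktgen:/> start 0
```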

Answered By – Vipin Varghese

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
