
DPDK Testing Approaches

July 5, 2021 (updated July 15th, 2021) | Blog

The DPDK (https://www.dpdk.org) project presents a unique set of challenges for fully testing the code with the largest coverage possible (i.e. the goal of continuous integration and gating). The DPDK testing efforts are overseen by the Community CI group, which meets every other week to review status and discuss open issues with the testing. Before we dive into the details, it helps to know the terminology the CI group has developed to describe the different types of testing used within the DPDK CI efforts. First, and probably most obvious, is simple "compile" testing, or "does it build" testing, typically referred to as "Compile Testing" by the CI group. Next is "Unit Testing," which refers to running the unit tests provided directly within the DPDK source code tree and maintained by the DPDK developer community. From there, we have two more categories: "Functional Testing" and "Performance Testing." In these cases, the CI community uses the terms to refer to testing where the DPDK stack is operational on the test system (i.e. PMD drivers are bound). Below, we'll delve a little further into each of the testing types, their system requirements, and more.

Compile testing has the advantage of remaining relatively "uncoupled" from the testing infrastructure. For example, it doesn't generally depend on specific hardware or PMDs. There are some small exceptions around architecture, where the compile operation can be a "native" or a "cross compile" target. In the current CI testing, compile jobs run natively on x86_64 and aarch64 architectures in the community lab, along with an additional cross compile case (x86_64 to aarch64). Another dimension of the compile testing involves the actual OS, libraries, and compiler used for the "test." In the community lab, thus far, most of the compile jobs run using GCC versions across various operating systems. The lab aims to maintain coverage of all operating systems officially supported by the current DPDK project releases, along with some additional common OSes, accomplished by running the compile jobs in containers. This greatly simplifies OS update and maintenance, since we can always just update our container images to the latest base image versions. Lastly, for a few of the OSes, the testing also runs with the Clang compiler.
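The dimensions above (OS, compiler, architecture, native vs. cross) multiply out into a matrix of compile jobs. As a minimal sketch of that idea, the following Python snippet enumerates such a matrix; the OS and compiler names are illustrative placeholders, not the community lab's exact configuration.

```python
from itertools import product

# Illustrative values only -- not the lab's actual OS/compiler lineup.
operating_systems = ["ubuntu-22.04", "fedora-38", "debian-12"]
compilers = ["gcc", "clang"]

# Native targets plus one cross-compile case (x86_64 -> aarch64),
# mirroring the dimensions described in the paragraph above.
targets = [
    ("x86_64", "native"),
    ("aarch64", "native"),
    ("aarch64", "cross-from-x86_64"),
]

def build_matrix():
    """Return every (os, compiler, arch, mode) combination to compile-test."""
    return [
        (os_name, cc, arch, mode)
        for (os_name, cc), (arch, mode) in product(
            product(operating_systems, compilers), targets
        )
    ]

matrix = build_matrix()
for job in matrix:
    print(job)
```

Even this toy matrix yields 18 jobs, which is why containerized runners (one image per OS) keep the maintenance burden manageable.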

Where the Compile Testing leaves off, the Unit Testing picks up. These tests start to run core parts of the compiled DPDK code, flexing operations involving memory allocations and event triggers. Since the unit testing runs parts of the compiled DPDK code, it does introduce some requirements on the underlying infrastructure. For example, the unit testing expects the system to be prepared with some available hugepages and memory for the unit tests to consume. In the Community Lab (https://www.iol.unh.edu/hosted-resources/dpdk-performance-lab), the unit testing still runs within the container-based infrastructure. The containers run with elevated privileges to support hugepages access; however, the automation system limits execution to one unit-testing container at a time (other compile testing might still run in parallel). This ensures the unit testing has full access to the required memory and there is only a single "instance" of DPDK running on the system.
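On Linux, available hugepages are reported in /proc/meminfo, so a pre-flight check before launching a unit-test run can be quite simple. The sketch below parses that format against a sample string; the field names are the standard kernel ones, but the threshold and sample values are purely illustrative, not the lab's actual automation.

```python
# Sample text in the standard /proc/meminfo format; on a real system you
# would read the file itself. Values here are illustrative.
SAMPLE_MEMINFO = """\
MemTotal:       32617924 kB
HugePages_Total:    1024
HugePages_Free:     1024
Hugepagesize:       2048 kB
"""

def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key: value [unit]' lines into a dict."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info

def hugepages_ready(meminfo, required_free=512):
    """True if enough free hugepages remain for a unit-test run."""
    return meminfo.get("HugePages_Free", 0) >= required_free

info = parse_meminfo(SAMPLE_MEMINFO)
print(hugepages_ready(info))  # with the sample data above: True
```

Gating each run on a check like this (plus serializing the privileged containers) is what guarantees a single DPDK instance has the memory it needs.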

With Functional Testing and Performance Testing, the tool chain switches to DTS (DPDK Test Suite), a Python-based tool chain developed within the community specifically to test DPDK running on bare-metal systems. The community splits this testing into cases that focus on functionality and cases that focus on performance. Functional Testing verifies behaviors such as configuring a packet filter or manipulating frames, confirming that the stack implements the function correctly. Performance Testing focuses on verifying the stack against a specific benchmark, such as the maximum packet forwarding rate. Both types of testing run directly on bare-metal hardware, covering combinations of system architecture and the actual network interface or PMD. This ensures DPDK code is tested against different interface vendor hardware combinations. The hardware vendors supporting the DPDK testing efforts keep the interface devices and firmware updated to their latest versions, so as the DPDK code evolves, so does the test environment and its hardware / firmware.
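For a benchmark like maximum packet forwarding rate, the ceiling a performance test measures against follows from standard Ethernet framing arithmetic. The worked example below uses well-known framing constants (preamble/SFD and inter-frame gap); it is a generic illustration, not a result from the lab.

```python
# Standard Ethernet per-frame wire overhead, in bytes.
PREAMBLE_AND_SFD = 8   # preamble (7) + start-of-frame delimiter (1)
INTERFRAME_GAP = 12    # minimum inter-frame gap

def max_pps(link_bps, frame_bytes):
    """Theoretical maximum packets/sec for a link speed and frame size."""
    wire_bytes = frame_bytes + PREAMBLE_AND_SFD + INTERFRAME_GAP
    return link_bps / (wire_bytes * 8)

# 64-byte frames on a 10 Gbit/s link: the classic ~14.88 Mpps line rate
# that small-packet forwarding benchmarks are measured against.
print(round(max_pps(10e9, 64)))  # 14880952
```

A performance test then reports how close the device-under-test's measured forwarding rate comes to this theoretical line rate.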

Hopefully the above descriptions have helped to explain the testing categories and approaches being taken by the DPDK CI Community efforts to completely test the DPDK project code. In future blogs we'll get into more detail about how and when the different types of testing are run, i.e. triggered by / for new patches or run periodically on specific code branches. The CI Community meets bi-weekly, on Thursdays at 1pm UTC, and all DPDK community members are welcome to join and ask questions or help contribute to the testing efforts. The community mailing list, ci@dpdk.org, is another good resource.