Inside DPDK 23.07: A Comprehensive Overview of the New Major Release


We’re excited to announce that a new major release, DPDK 23.07, is now available for download. Here, we will take a detailed look at what this release brings, the contributors behind it, and what lies ahead for the DPDK community.

Unpacking DPDK 23.07

While the overall number of commits is modest at 1028, this release represents a considerable amount of work, as the number of changed lines shows: 1554 files were altered, with 157260 insertions and 58411 deletions. This was no small feat, involving 178 authors who each put their expertise and dedication into making this release possible.

Community Engagement and Contributions

Our doors are always open for more hands, and we encourage anyone willing to contribute at any stage of the process. There is a wide array of areas you can help in, such as testing, review, or merge tasks. Every contribution, no matter how small, brings us one step closer to our common goal.

It’s worth noting that there is no plan to start a maintenance branch for 23.07. However, this new version is ABI-compatible with 22.11 and 23.03, which means binaries built against those older versions will continue to work with it.

Notable New Features in DPDK 23.07

The new release brings several enhancements and features, promising an improved experience for users:

  • the AMD CDX bus and PCI MMIO read/write capabilities
  • new flow patterns (Tx queue, InfiniBand BTH), new flow actions (push/remove IPv6 extension), and an indirect flow rule list
  • a vhost interrupt callback, more ShangMi crypto algorithms, and a PDCP library

We have also seen the removal of the LiquidIO driver, a move that aligns with DPDK’s commitment to providing a refined and optimized toolkit.

Finally, this release includes a DMA device performance test application, and there has been progress on the new DTS: it is now able to run a basic UDP test, improving the utilities available to developers.

More details about these changes can be found in the release notes here.

A Big Welcome to New Contributors

The DPDK community is growing, and 23.07 has seen contributions from 37 new contributors including authors, reviewers, and testers. We extend our warm welcome to all of them and appreciate their invaluable contributions.

DPDK 23.07 was not a solo effort. The release has seen contributions from multiple organizations, the leading contributors being:

Intel (252 commits), Marvell (225 commits), and NVIDIA (127 commits), among others. This community effort underlines the collaborative spirit of the DPDK project.

Reviewer Acknowledgements

A special thank you goes to all those who took up the often underappreciated task of reviewing other people’s work. Your efforts have been instrumental in maintaining the high quality of DPDK code. The top non-PMD reviewers, based on Reviewed-by and Acked-by tags, include Ferruh Yigit, Akhil Goyal, David Marchand, Chenbo Xia, and Bruce Richardson.

Looking Ahead

There is always room for improvement: there are currently more than 300 open bugs in our Bugzilla, and the number of TODO and FIXME comments marking half-done work is on the rise. This highlights the need for increased efforts in code cleanup.

Looking to the future, the next version, 23.11, is slated for release in November. We encourage developers to submit new features for this version within the next two weeks. You can find the next milestones here.

Mark Your Calendars

Lastly, don’t forget to register for the upcoming DPDK Summit in September. It’s a great opportunity to connect with the DPDK community, learn about the latest developments, and share your experiences.

A big thank you to everyone for your continuous contributions to DPDK. See you in Dublin for the DPDK Summit!

Get the 23.07 release here.

The DPDK Community Lab: Reflecting on 2022 and Looking Ahead


Hosted by The University of New Hampshire InterOperability Lab (UNH-IOL), the DPDK Community Lab exists to provide prompt and reliable continuous integration (CI) testing for all new patches submitted to DPDK by its community of developers. Every time a patch is submitted to DPDK Patchwork, it is automatically applied to the main DPDK branch, or the appropriate next-* branch, and run across our test beds.

Now entering its 6th year in operation, the Community Lab has progressed beyond its initial goal of simply providing performance testing results. Test coverage is also provided for functional, unit, compile, and ABI testing across a wide array of environments. This has brought greater development stability and feedback to the DPDK developer community, and also to DPDK gold project members, who are eligible to host their resources (NICs, CPUs, etc.) at the UNH-IOL lab. This is a great value provided not only to DPDK at large, but also to the participating vendors, who would otherwise have to host this testing in house at a significant cost or develop their products without the reliable and timely feedback that CI testing provides.

In 2022, the Community Lab again expanded on its existing operations. Members of the Community Lab submitted patches to DPDK and DTS in order to increase testing and demonstrate DPDK functions. This year, we also broadened the reach of our testing on a hardware coverage level and test case level. With new Arm servers, updated NICs from our participating members, and our furthered use of containerization, we have greatly diversified the set of environments under testing. Specific examples of developments and happenings in the Community Lab this year include, but are not limited to, the following:

  • Submitted patches to DPDK in order to bring its FIPS compliance and FIPS sample application up to date with new processes from NIST, and set up CI infrastructure to test and demonstrate these capabilities across crypto-devs and algorithms. 
  • Expanded hardware testing coverage by introducing newer, higher capacity NICs, as well as expanded our test coverage on ARM based systems.
  • Expanded DPDK branch coverage, including the new LTS “staging” branches. By incorporating these branches into the CI testing process, we can provide greater stability for LTS backporting. 
  • Participated in the DPDK Test Suite (DTS) working group to develop the new DTS and incorporate it into the DPDK main repository. Our contributions included collecting feedback from the community to determine requirements for DTS, setting new formatting standards and development best practices to provide high quality code, and developing a testing approach that provides a consistent environment between users by containerizing DTS.
  • Developed and upstreamed a container building system to the DPDK-CI repository.
  • Updated our Linux distribution test coverage list based on community feedback.
  • Worked with Microsoft to bring DPDK unit testing to Windows.
  • Made the necessary changes in our internal infrastructure and in DPDK to enable OpenSSL unit testing in the lab.

We’re proud of the work we’ve done at the DPDK Community Lab, and we’re looking forward to the 2023 goals. We will work to maintain the test coverage we’ve built up over the years and push to demonstrate and test more features according to the desires of the community and our participating vendors.

On the hardware side, we aim to introduce new NICs for testing and incorporate other types of hardware accelerators where possible. Ideally, we will also partner with more DPDK gold member companies to provide our CI testing on their platforms and equipment. Another goal is to expand our traffic generator options so we can provide testing on a broader range of environments.

On the software side, the Community Lab will continue making contributions toward the development of the new DTS, implementing an email-based retesting framework, and increasing our contributions and involvement with the DPDK continuous integration repo. We want to strengthen CI testing within our own lab, but also across the entire community. We’re going to continue engaging with the DPDK CI community, and as always, anyone interested in our work or CI testing in general can find us bi-weekly at the DPDK Community CI meetings. We look forward to writing another blog post in one year’s time to review and reflect on the progress made in the lab during the 2023 calendar year!

DPDK 22.11 Release is Now Available!


A new major release, DPDK 22.11, is now available: https://fast.dpdk.org/rel/dpdk-22.11.tar.xz

It was a comfortable release cycle, with:

  •         1387 commits from 193 authors
  •         1902 files changed, 137224 insertions(+), 61256 deletions(-)

The branch 22.11 should be supported for at least two years, maybe more, making it recommended for system integration and deployment.

The new major ABI version is 23.

The next releases, 23.03 and 23.07,  will be ABI-compatible with 22.11.

Below are some new features:

  •         LoongArch build
  •         Intel uncore frequency control
  •         mempool optimizations
  •         new mbuf layout for IOVA-as-VA build
  •         multiple mbuf pools per Rx queue
  •         Rx buffer split based on protocol
  •         hardware congestion management
  •         hairpin memory configuration
  •         proactive NIC error handling
  •         Rx/Tx descriptor dump
  •         flow API extensions
  •         GVE (Google Virtual Ethernet)
  •         IDPF (Intel Infrastructure DataPath Function)
  •         UADK supporting HiSilicon crypto
  •         MACsec processing offload
  •         ShangMi crypto algorithms
  •         baseband FFT operations
  •         eventdev Tx queue start/stop
  •         eventdev crypto vectorization
  •         NitroSketch membership
  •         DTS introduction in DPDK repository

More details in the release notes: https://doc.dpdk.org/guides/rel_notes/release_22_11.html

There are 74 new contributors (including authors, reviewers and testers).

Welcome to:  Abdullah Sevincer, Abhishek Maheshwari, Alan Liu, Aleksandr Miloshenko, Alex Vesker, Alexander Chernavin, Allen Hubbe, Amit Prakash Shukla, Anatolii Gerasymenko, Arkadiusz Kubalewski, Arshdeep Kaur, Benjamin Le Berre, Bhagyada Modali, David MacDougal, Dawid Zielinski, Dexia Li, Dukai Yuan, Erez Shitrit, Fei Qin, Frank Du, Gal Shalom, Grzegorz Siwik, Hamdan Igbaria, Hamza Khan, Henning Schild, Huang Wei, Huzaifa Rahman, James Hershaw, Jeremy Spewock, Jin Ling, Joey Xing, Jun Qiu, Kaiwen Deng, Karen Sornek, Ke Xu, Kevin O’Sullivan, Lei Cai, Lei Ji, Leszek Zygo, Long Wu, Lukasz Czapnik, Lukasz Kupczak, Mah Yock Gen, Mandal Purna Chandra, Mao YingMing, Marcin Szycik, Michael Savisko, Min Zhou, Mingjin Ye, Mingshan Zhang, Mário Kuka, Piotr Gardocki, Qingmin Liu, R Mohamed Shah, Roman Storozhenko, Sathesh Edara, Sergey Temerkhanov, Shiqi Liu, Stephen Coleman, Steven Zou, Sunil Uttarwar, Sunyang Wu, Sylvia Grundwürmer, Tadhg Kearney, Taekyung Kim, Taripin Samuel, Tomasz Jonak, Tomasz Zawadzki, Tsotne Chakhvadze, Usman Tanveer, Wiktor Pilarczyk, Yaqi Tang, Yi Li and Zhangfei Gao.

Below is the number of commits per employer:

A big thank you to all the courageous people who took on the unrewarding task of reviewing others’ work! Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:

  •  (53)     Akhil Goyal 
  •  (45)     Andrew Rybchenko 
  •  (36)    Morten Brørup 
  •  (34)     Niklas Söderlund 
  •  (34)     Bruce Richardson 
  •  (33)     David Marchand 
  •  (31)     Ori Kam
  •  (25)     Maxime Coquelin 
  •  (21)     Jerin Jacob 
  •  (20)     Chengwen Feng 

The next version, DPDK 23.03, will be available in March 2023. The new features for 23.03 can be submitted during the next 4 weeks: http://core.dpdk.org/roadmap#dates

Please share your roadmap.

Thanks, everyone!

DPDK 22.07 Release is Now Available


The latest DPDK release, 22.07, is now available: https://fast.dpdk.org/rel/dpdk-22.07.tar.xz

As is typical, this summer release is quite small:

  • 1021 commits from 177 authors
  • 1149 files changed, 77256 insertions(+), 26288 deletions(-)

There are no plans to start a maintenance branch for 22.07.
This version is ABI-compatible with 21.11 and 22.03.

New features include:

  • initial RISC-V support
  • sequence lock
  • protocol-based metering
  • Rx threshold event
  • SFP telemetry
  • async vhost improvements
  • vhost library statistics
  • vmxnet3 versions 5 & 6
  • ECDH crypto
  • eventdev runtime attributes
  • DMA device telemetry
  • SWX pipeline improvements
  • integration as Meson subproject

More details in the release notes: https://doc.dpdk.org/guides/rel_notes/release_22_07.html

Note: GCC 12 may emit some warnings, as some fixes are still missing.

Welcome! There are 44 new contributors (including authors, reviewers and testers): 

Abdullah Ömer Yamaç, Abhimanyu Saini, Bassam Zaid AlKilani, Damodharam Ammepalli, Deepak Khandelwal, Diana Wang, Don Wallwork, Duncan Bellamy, Ferdinand Thiessen, Fidaullah Noonari, Frank Zhao, Hanumanth Pothula, Heinrich Schuchardt, Hernan Vargas, Jakub Wysocki, Jin Liu, Jiri Slaby, Ke Zhang, Kent Wires, Marcin Danilewicz, Michael Rossberg, Michal Mazurek, Mike Pattrick, Mingxia Liu, Niklas Söderlund, Omar Awaysa, Peng Zhang, Quentin Armitage, Richard Donkin, Romain Delhomel, Sam Grove, Spike Du, Subendu Santra, Tianhao Chai, Veerasenareddy Burru, Walter Heymans, Weiyuan Li, Wenjing Qiao, Xiangjun Meng, Xu Ting, Yinjun Zhang, Yong Xu, Zhichao Zeng and Zhipeng Lu.

Below is the percentage of commits per employer:

A big thank you to all the courageous people who took on the unrewarding task of reviewing others’ work. Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:

         54     Akhil Goyal 
         52     Fan Zhang 
         42     Jerin Jacob 
         35     Chenbo Xia 
         34     Maxime Coquelin 
         28     Andrew Rybchenko 
         23     Matan Azrad 
         21     Bruce Richardson 
         20     Qi Zhang 
         20     Ferruh Yigit 
         19     Morten Brørup 
         19     Anoob Joseph 

The next version, 22.11, is scheduled for release in November 2022. New features for 22.11 can be submitted during the next 4 weeks: http://core.dpdk.org/roadmap#dates Please share your roadmap.

Thanks, everyone!

Now Available! DPDK Release 22.03


A new release is available: https://fast.dpdk.org/rel/dpdk-22.03.tar.xz

Winter release numbers are quite small, as usual:

  • 956 commits from 181 authors
  • 2289 files changed, 83849 insertions(+), 97801 deletions(-)

There are no plans for a maintenance branch for 22.03.

This version is ABI-compatible with 21.11.

Below are some new features:

  • Fast restart by reusing hugepages
  • UDP/TCP checksum on multi-segments
  • IP reassembly offload
  • Queue-based priority flow control
  • Flow API for templates and async operations
  • Private ethdev driver info dump
  • Private user data in asymmetric crypto session

More details are available in the official release notes: https://doc.dpdk.org/guides/rel_notes/release_22_03.html

There are 51 new contributors (including authors, reviewers and testers)!

Welcome to Abhimanyu Saini, Adham Masarwah, Asaf Ravid, Bin Zheng, Brian Dooley, Brick Yang, Bruce Merry, Christophe Fontaine, Chuanshe Zhang, Dawid Gorecki, Daxue Gao, Geoffrey Le Gourriérec, Gerry Gribbon, Harold Huang, Harshad Narayane, Igor Chauskin, Jakub Poczatek, Jeff Daly, Jie Hai, Josh Soref, Kamalakannan R, Karl Bonde Torp, Kevin Liu, Kumara Parameshwaran, Madhuker Mythri, Markus Theil, Martijn Bakker, Maxime Gouin, Megha Ajmera, Michael Barker, Michal Wilczynski, Nan Zhou, Nobuhiro Miki, Padraig Connolly, Peng Yu, Peng Zhang, Qiao Liu, Rahul Bhansali, Stephen Douthit, Tianli Lai, Tudor Brindus, Usama Arif, Wang Yunjian, Weiguo Li, Wenxiang Qian, Wenxuan Wu, Yajun Wu, Yiding Zhou, Yingya Han, Yu Wenjun and Yuan Wang.

Below is the percentage of commits per employer (with authors count):

A big thank you to all the courageous people who took on the unrewarding task of reviewing others’ work.

Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
         41     Akhil Goyal 
         29     Bruce Richardson 
         26     Ferruh Yigit 
         20     Ori Kam 
         19     David Marchand 
         16     Tyler Retzlaff 
         15     Viacheslav Ovsiienko 
         15     Morten Brørup 
         15     Chenbo Xia 
         14     Stephen Hemminger 
         14     Jerin Jacob 
         12     Dmitry Kozlyuk 
         11     Ruifeng Wang 
         11     Maxime Coquelin 

The next version of DPDK, 22.07, will be released in July.

The new features for 22.07 can be submitted during the next 3 weeks: http://core.dpdk.org/roadmap#dates.

Please share your roadmap.

Thanks everyone!

DPDK 21.11 is Now Available!


By David Marchand

A new major release of DPDK, DPDK 21.11, is now available: https://fast.dpdk.org/rel/dpdk-21.11.tar.xz

This is a big DPDK release, with:

  •    1875 commits from 204 authors
  •     2413 files changed, 259559 insertions(+), 87876 deletions(-)

The branch 21.11 should be supported for at least two years, making it recommended for system integration and deployment. The new major ABI version is 22. The next releases, 22.03 and 22.07, will be ABI-compatible with 21.11.

As you probably noticed, the year 2022 will see only two intermediate releases before the next 22.11 LTS.

Below are some new features, grouped by category:

* General

  • hugetlbfs subdirectories
  • AddressSanitizer (ASan) integration for debug
  • mempool flag for non-IO usages
  • device class for DMA accelerators, with drivers for HiSilicon, Intel DSA, Intel IOAT, Marvell CNXK and NXP DPAA
  • device class for GPU devices and driver for NVIDIA CUDA
  • Toeplitz hash using Galois Fields New Instructions (GFNI)

* Networking

  •     MTU handling rework
  •     get all MAC addresses of a port
  •     RSS based on L3/L4 checksum fields
  •     flow match on L2TPv2 and PPP
  •     flow flex parser for custom header
  •     control delivery of HW Rx metadata
  •     transfer flows API rework
  •     shared Rx queue
  •     Windows support of Intel e1000, ixgbe and iavf
  •     driver for NXP ENETFEC
  •     vDPA driver for Xilinx devices
  •     virtio RSS
  •     vhost power monitor wakeup
  •     testpmd multi-process
  •     pcapng library and dumpcap tool

* API/ABI

  •     API namespace improvements and cleanups
  •     API internals hidden
  •     flags check for future ABI compatibility

More details in the release notes:  http://doc.dpdk.org/guides/rel_notes/release_21_11.html

There are 55 new contributors (including authors, reviewers and testers)! Welcome to:  Abhijit Sinha, Ady Agbarih, Alexander Bechikov, Alice Michael, Artur Tyminski, Ben Magistro, Ben Pfaff, Charles Brett, Chengfeng Ye, Christopher Pau, Daniel Martin Buckley, Danny Patel, Dariusz Sosnowski, David George, Elena Agostini, Ganapati Kundapura, Georg Sauthoff, Hanumanth Reddy Pothula, Harneet Singh, Huichao Cai, Idan Hackmon, Ilyes Ben Hamouda, Jilei Chen, Jonathan Erb, Kumara Parameshwaran, Lewei Yang, Liang Longfeng, Longfeng Liang, Maciej Fijalkowski, Maciej Paczkowski, Maciej Szwed, Marcin Domagala, Miao Li, Michal Berger, Michal Michalik, Mihai Pogonaru, Mohamad Noor Alim Hussin, Nikhil Vasoya, Pawel Malinowski, Pei Zhang, Pravin Pathak, Przemyslaw Zegan, Qiming Chen, Rashmi Shetty, Richard Eklycke, Sean Zhang, Siddaraju DH, Steve Rempe, Sylwester Dziedziuch, Volodymyr Fialko, Wojciech Drewek, Wojciech Liguzinski, Xingguang He, Yu Wenjun, Yvonne Yang.

Below is the percentage of commits per employer:

A big thank you to all the courageous people who took on the unrewarding task of reviewing other people’s work.

Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
    113    Akhil Goyal 
     83    Ferruh Yigit 
     70    Andrew Rybchenko 
     51    Ray Kinsella 
     50    Konstantin Ananyev
     47    Bruce Richardson 
     46    Conor Walsh 
     45    David Marchand 
     39    Ruifeng Wang 
     37    Jerin Jacob 
     36    Olivier Matz 
     36    Fan Zhang 
     32    Chenbo Xia 
     32    Ajit Khaparde 
     25    Ori Kam 
     23    Kevin Laatz 
     22    Ciara Power 
     20    Thomas Monjalon 
     19    Xiaoyun Li 
     18    Maxime Coquelin 

The new features for 22.03 may be submitted during the next 4 weeks, so that we can all enjoy a good break at the end of this year. 2022 will see a change of pace in release timing; let’s make the best of it to do good reviews.

DPDK 22.03 is scheduled for early March: http://core.dpdk.org/roadmap#dates

Please share your roadmap.

Thanks everyone!

DPDK 21.08 is Here!


By Thomas Monjalon, DPDK Tech Board

The DPDK community has issued its latest quarterly release, 21.08, which is available here: https://fast.dpdk.org/rel/dpdk-21.08.tar.xz

Stats for this (smaller) release:

  •   922 commits from 159 authors
  •   1069 files changed, 150746 insertions(+), 85146 deletions(-)

There are no plans to start a maintenance branch for 21.08. This version is ABI-compatible with 20.11, 21.02 and 21.05.

New features for 21.08 include:

  • Linux auxiliary bus
  • Aarch32 cross-compilation
  • Arm CPPC power management
  • Rx multi-queue monitoring for power management
  • XZ compressed firmware read
  • Marvell CNXK drivers for ethernet, crypto and baseband PHY
  • Wangxun ngbe ethernet driver
  • NVIDIA mlx5 crypto driver supporting AES-XTS
  • ISAL compress support on Arm

More technical details about 21.08 are included in the release notes: https://doc.dpdk.org/guides/rel_notes/release_21_08.html

There are 30 new contributors (including authors, reviewers and testers): 

Welcome to Aakash Sasidharan, Aman Deep Singh, Cheng Liu, Chenglian Sun, Conor Fogarty, Douglas Flint, Gaoxiang Liu, Ghalem Boudour, Gordon Noonan, Heng Wang, Henry Nadeau, James Grant, Jeffrey Huang, Jochen Behrens, John Levon, Lior Margalit, Martin Havlik, Naga Harish K S V, Nathan Skrzypczak, Owen Hilyard, Paulis Gributs, Raja Zidane, Rebecca Troy, Rob Scheepens, Rongwei Liu, Shai Brandes, Srujana Challa, Tudor Cornea, Vanshika Shukla, and Yixue Wang.

Below is the percentage of commits per employer company:

Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:

         45     Akhil Goyal
         34     Jerin Jacob
         21     Ruifeng Wang
         20     Ajit Khaparde
         19     Matan Azrad
         19     Andrew Rybchenko
         17     Konstantin Ananyev
         15     Chenbo Xia
         14     Maxime Coquelin
         14     David Marchand
         13     Viacheslav Ovsiienko
         11     Thomas Monjalon
          9     Dmitry Kozlyuk
          8     Stephen Hemminger
          8     Bruce Richardson

The next DPDK release, 21.11, will be a robust one.

New features for 21.11 can be submitted during the next month, at this link: http://core.dpdk.org/roadmap#dates. Please be sure to share your feature roadmap.

Thanks to all involved! Enjoy the rest of the summer!

DPDK Testing Approaches


The DPDK (https://www.dpdk.org) project presents a unique set of challenges for fully testing the code with the largest coverage possible (i.e. the goal of continuous integration and gating). The DPDK testing efforts are overseen by the Community CI group, which meets every other week to review status and discuss open issues with the testing.

Before we dive into the details of the testing, the CI group has developed some terminology to describe the different types of testing used within the DPDK CI efforts. First, and probably most obvious, is the simple “compile” testing, or the “does it build” type of testing; this is typically referred to as “Compile Testing” by the CI group. Next up is “Unit Testing,” which refers to running the unit tests provided directly within the DPDK source code tree and maintained by the DPDK developer community. From there, we have two more categories of testing, “Functional Testing” and “Performance Testing.” In these cases, the CI community uses the terms to refer to setups where the DPDK stack is operational on the test system (i.e. PMD drivers are bound). Next, we’ll delve a little further into each of the testing types, their system requirements, and so on.

Compile testing has the advantage of remaining relatively “uncoupled” from the testing infrastructure. For example, it doesn’t generally depend on specific hardware or PMDs. There are some small exceptions around architecture, where the compile operation can be a “native” or a “cross compile” target. In the current CI testing, compile jobs run natively on x86_64 and aarch64 architectures in the community lab, along with an additional cross compile case (x86_64 to aarch64). Another dimension of the compile testing involves the actual OS, libraries, and compiler used for the “test.” In the community lab, thus far, most of the compile jobs run using GCC versions across various operating systems. The lab aims to maintain coverage of all operating systems officially supported by the current DPDK project releases, along with some additional common OSes, accomplished by running the compile jobs in containers. This greatly simplifies the OS update and maintenance processes, since we can always just update our container images to the latest base image versions. Lastly, for a few of the OSes, the testing also runs with the Clang compiler.

Where the Compile Testing leaves off, the Unit Testing picks up. These tests start to run core parts of the compiled DPDK code, exercising operations involving memory allocations and event triggers. Since the unit testing runs parts of the compiled DPDK code, it does introduce some requirements on the underlying infrastructure. For example, the unit testing expects the system to be prepared with some hugepages and memory available for its consumption. In the Community Lab (https://www.iol.unh.edu/hosted-resources/dpdk-performance-lab), the unit testing still runs within the container-based infrastructure. The containers run with elevated privileges to support hugepages access; however, the automation system limits execution to one unit-testing container at a time (other compile testing might still run in parallel). This ensures the unit testing has full access to the required memory and that there is only a single “instance” of DPDK running on the system.

With Functional Testing and Performance Testing, the tool chain switches to DTS (DPDK Test Suite), a Python-based tool chain developed within the community specifically to test DPDK running on bare-metal systems. The community splits this testing into cases that focus on functionality and cases that focus on performance. Functional testing verifies items such as configuring a packet filter or performing frame manipulation, and confirms that the stack implements the function. Performance testing focuses on verifying the stack against a specific benchmark, such as the maximum packet forwarding rate. Both types of testing run directly on bare-metal hardware, across combinations of system architecture and network interface or PMD. This ensures DPDK code is tested with different interface vendor hardware combinations. The hardware vendors supporting the DPDK testing efforts keep the interface devices and firmware updated to their latest versions, so as the DPDK code evolves, so does the test environment and its hardware / firmware.

Hopefully the above descriptions have helped to explain the testing categories and approaches taken by the DPDK CI community to test the DPDK project code as completely as possible. In future blogs we will get into more detail about how and when the different types of testing are run, i.e. triggered by / for new patches or run periodically on specific code branches. The CI community meets bi-weekly, on Thursdays at 1pm UTC, and all DPDK community members are welcome to join, ask questions, or help contribute to the testing efforts. The community mailing list, ci@dpdk.org, is also a good resource.

Reader-Writer Concurrency


By Honnappa Nagarahalli

As an increasing number of cores are packed together in a SoC, thread synchronization plays a key role in the scalability of an application. In the context of networking applications, because of the partitioning of the application into control plane (write-mostly threads) and data plane (read-mostly threads), reader-writer concurrency is a frequently encountered thread synchronization use case. Without an effective solution for reader-writer concurrency, the application will end up with:

  • race conditions, which are hard to solve
  • unnecessary/excessive validations in the reader, resulting in degraded performance
  • excessive use of memory barriers, affecting performance
  • and, finally, code that is hard to understand and maintain

In this blog, I will 

  • briefly introduce the reader-writer concurrency problem 
  • talk about solving reader-writer concurrency using full memory barriers and the C11 memory  model 
  • and, talk about complex scenarios that exist 

Problem Statement 

Consider two or more threads or processes sharing memory. Writer threads/processes (writers) update a data structure in shared memory in order to convey information to readers. Reader threads/processes (readers) read the same data structure to carry out some action. For example, in the context of a networking application, the writer could be a control plane thread writing to a hash table, and the reader could be a data plane thread performing lookups in the hash table to identify an action to take on a packet.

Essentially, the writer wants to communicate some data to the reader. It must be done such that the  reader observes the data consistently and atomically instead of observing an incomplete or  intermediate state of data. 

This problem can easily be solved using a reader-writer lock. But locks have scalability problems, and a  lock free solution is required so that performance scales linearly with the number of readers. 

Solution 

In order to communicate the ‘data’ atomically, a ‘guard variable’ can be used. Consider the following  diagram.

The writer sets the guard variable atomically after it writes the data. The write barrier between the two  ensures that the store to data completes before the store to guard variable. 

The reader reads the guard variable atomically and checks if it is set. If it is set, indicating that the writer has completed writing the data, the reader can read the data. The read barrier between the two ensures that the load of data starts only after the guard variable has been loaded; i.e., the reader is prevented from reading data (speculatively or due to reordering) until the guard variable is set. There is no need for any additional validations in the reader.

As shown, the writer and the reader synchronize with each other using the guard variable. The use of  the guard variable ensures that the data is available to the reader atomically irrespective of the size of  the data. There is no need of any additional barriers in the writer or the reader. 

Some of the CPU architectures enforce the program order for memory operations in the writer and the  reader without the need for explicit barriers. However, since the compiler can introduce re-ordering, the  compiler barriers are required on such architectures. 

So, when working on a reader-writer concurrency use-case, the first step is to identify the ‘data’ and the  ‘guard variable’. 

Solving reader-writer concurrency using C11 memory model 

In the above algorithm, the read and write barriers can be implemented as full barriers. However, full barriers are not necessary: the ordering only needs to be enforced between the memory operations on the data (and other operations dependent on the data) and the guard variable. Other independent memory operations need not be ordered with respect to the data or the guard variable. This gives CPU micro-architectures more flexibility to reorder operations while executing the code. The C11 memory model allows such behaviors to be expressed; in particular, it allows the full barriers to be replaced with one-way barriers.

As shown above, the writer uses the atomic_store_explicit function with memory_order_release to set the guard variable. This ensures that the write to the data completes before the write to the guard variable, while still allowing later memory operations to hoist above the barrier.

The reader uses the atomic_load_explicit function with memory_order_acquire to load the guard variable. This ensures that the load of the data starts only after the load of the guard variable completes, while still allowing earlier memory operations to sink below the barrier.

The atomic_store_explicit and atomic_load_explicit functions ensure that the operations are atomic and implicitly enforce the required compiler barriers.
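A sketch of the same writer and reader using these one-way barriers, again with a simple 64-bit payload and illustrative names:

```c
#include <stdatomic.h>
#include <stdint.h>

static uint64_t data;
static atomic_int guard;

/* Writer: the release store ensures the write to data completes before
 * the guard variable becomes visible as set. */
void writer(uint64_t value)
{
    data = value;
    atomic_store_explicit(&guard, 1, memory_order_release);
}

/* Reader: the acquire load ensures the load of data starts only after
 * the guard variable has been loaded. */
int reader(uint64_t *out)
{
    if (atomic_load_explicit(&guard, memory_order_acquire) == 0)
        return 0;
    *out = data;
    return 1;
}
```

Compared with the full-barrier version, no explicit fences are needed; the ordering is attached to the guard variable's store and load themselves.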

Challenges 

The above paragraphs did not go into various details associated with the data. The contents of the data, the CPU architecture's support for atomic operations, and the need to support APIs that modify the data can all present challenges.

Size of data is more than the size of atomic operations 

Consider data of the following structure.
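The original structure was not preserved in this copy of the article; the following is a hypothetical layout with the 160-bit size and the elements a, b, c and d that the discussion below refers to:

```c
#include <stdint.h>

/* Hypothetical layout: 64 + 32 + 32 + 32 = 160 bits of payload. */
struct data {
    uint64_t a;
    uint32_t b;
    uint32_t c;
    uint32_t d;
};
```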

The size of the data is 160 bits, which is larger than the atomic operations supported by CPU architectures. In this case, the guard variable is required to communicate the data atomically to the reader.

If only an add API needs to be supported, the basic algorithm described above is sufficient. The following code shows the implementation of the writer and the reader.
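A possible sketch of such an add API, assuming a hypothetical 160-bit structure and illustrative names (`entry_add`, `entry_lookup`, a per-entry `valid` flag as the guard variable):

```c
#include <stdatomic.h>
#include <stdint.h>

struct data {
    uint64_t a;
    uint32_t b, c, d;     /* 160 bits of payload in total */
};

struct entry {
    struct data d;
    atomic_int valid;     /* guard variable, initially 0 */
};

/* Writer (add API): fill in the entry, then publish it with a
 * release store to the guard variable. */
void entry_add(struct entry *e, const struct data *src)
{
    e->d = *src;
    atomic_store_explicit(&e->valid, 1, memory_order_release);
}

/* Reader: copies the data out and returns 1 once the entry is valid;
 * the acquire load orders the data loads after the guard load. */
int entry_lookup(struct entry *e, struct data *dst)
{
    if (atomic_load_explicit(&e->valid, memory_order_acquire) == 0)
        return 0;
    *dst = e->d;
    return 1;
}
```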

However, several challenges exist if one wants to support an API to modify or update the data, depending on the total size of the modifications.

Size of the modified elements is larger than the size of atomic operations

Let us consider the case when the total size of modifications is more than the size of atomic operations supported. 

Since all modified elements of the data need to be observed by the reader atomically, a new copy of the data needs to be created. This new copy can consist of a mix of modified and unmodified elements. The guard variable can be a pointer to the data, or any atomically modifiable variable used to derive the address of the data, for example an index into an array. After the new copy is created, the writer updates the guard variable to point to the new data. This ensures that the modified data is observed by the reader atomically.

As shown above, the elements b, c and d need to be updated. Memory at addr 0x200 is allocated for the  new copy of data. Element a is copied from the current data. Elements b, c and d are updated with new  values in the new copy of data. The guard variable is updated with the new memory address 0x200. This  ensures that the modifications of the elements b, c and d appear atomically to the reader. 

In the reader, there is a dependency between the guard variable and the data, as the data cannot be loaded without first loading the guard variable. Hence, the barrier between the load of the guard variable and the load of the data is not required. With the C11 memory model, memory_order_relaxed can be used to load the guard variable.

The following code shows the implementation of writer and reader.
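A sketch of this copy-then-publish scheme, under the same hypothetical 160-bit structure; the function names and the single global `current` pointer are assumptions for illustration:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

struct data {
    uint64_t a;
    uint32_t b, c, d;
};

/* The guard variable is a pointer to the current copy of the data. */
static _Atomic(struct data *) current;

/* Writer: build a new copy carrying the modified elements b, c and d,
 * then publish it with a release store. Returns the old copy, which
 * must only be freed once no reader can still be referencing it. */
struct data *data_update(uint32_t b, uint32_t c, uint32_t d)
{
    struct data *old = atomic_load_explicit(&current, memory_order_relaxed);
    struct data *copy = malloc(sizeof(*copy));

    if (copy == NULL)
        return NULL;
    copy->a = old != NULL ? old->a : 0;  /* unmodified element copied over */
    copy->b = b;
    copy->c = c;
    copy->d = d;
    atomic_store_explicit(&current, copy, memory_order_release);
    return old;
}

/* Reader: the address dependency between the pointer and the data it
 * points to lets a relaxed load suffice here, as discussed in the text. */
const struct data *data_read(void)
{
    return atomic_load_explicit(&current, memory_order_relaxed);
}
```

Note that `data_update` hands the old copy back to the caller precisely because of the reclamation issue mentioned below: the writer must not free it while readers may still hold the pointer.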

Note that the writer must ensure that all the readers have stopped referencing the memory containing the old data before freeing it.

Size of the modified elements is equal to the size of atomic operations 

Now, consider the case when the total size of modifications is equal to the size of atomic operations supported. 

Such modifications do not need a new copy of data as the atomic operations supported by the CPU  architecture ensure that the updates are observed by the reader atomically. These updates must use  atomic operations. 

As shown above, if only element b has to be updated, it can be updated atomically without creating a  new copy of data in the writer. The store and load operations in writer and reader do not require any  barriers. memory_order_relaxed can be used with atomic_store_explicit and atomic_load_explicit  functions. 
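A sketch of such an in-place single-element update, assuming the same hypothetical structure with element b declared atomic (names are illustrative):

```c
#include <stdatomic.h>
#include <stdint.h>

struct data {
    uint64_t a;
    _Atomic uint32_t b;   /* the only element that is ever modified */
    uint32_t c, d;
};

/* Writer: a single atomic store; no new copy and no barrier needed. */
void update_b(struct data *d, uint32_t value)
{
    atomic_store_explicit(&d->b, value, memory_order_relaxed);
}

/* Reader: a single atomic load observes the update atomically. */
uint32_t read_b(const struct data *d)
{
    return atomic_load_explicit(&d->b, memory_order_relaxed);
}
```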

Size of data is equal to the size of atomic operations 

Consider a structure for data as follows 
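The original structure was not preserved in this copy of the article; the following is a hypothetical 64-bit layout consistent with the discussion below, including a field that can mark the data as valid:

```c
#include <stdint.h>

/* Hypothetical 64-bit layout; the valid field is part of the data itself. */
struct small_data {
    uint32_t value;
    uint16_t id;
    uint16_t valid;
};
```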

The size of the data is 64 bits. All modern CPU architectures support 64 bit atomic operations. 

In this case, there is no need for a separate guard variable for either the add or the modify API. An atomic store of the data in the writer and an atomic load of the data in the reader are sufficient. The store and load operations do not require any barrier in the writer or the reader either; memory_order_relaxed can be used with both atomic_store_explicit and atomic_load_explicit. As shown above, if required, part of the data itself can indicate whether it contains valid data.
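A sketch of this guardless case, assuming the hypothetical 64-bit structure above is transferred through a single atomic 64-bit slot (the names and the memcpy-based type punning are illustrative):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

struct small_data {
    uint32_t value;
    uint16_t id;
    uint16_t valid;       /* part of the data itself marks it valid */
};

static _Atomic uint64_t slot;   /* the data doubles as its own guard */

/* Writer: the whole 64-bit structure is stored in one atomic operation. */
void small_write(struct small_data d)
{
    uint64_t raw;

    memcpy(&raw, &d, sizeof(raw));
    atomic_store_explicit(&slot, raw, memory_order_relaxed);
}

/* Reader: one atomic load observes the whole structure consistently. */
struct small_data small_read(void)
{
    uint64_t raw = atomic_load_explicit(&slot, memory_order_relaxed);
    struct small_data d;

    memcpy(&d, &raw, sizeof(d));
    return d;
}
```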

Conclusion 

So, whenever faced with a reader-writer synchronization problem, first identify the data, then consider whether a guard variable is required. Keep in mind that what matters is the ordering of memory operations, not when the data becomes visible to the reader. Following the methods above will ensure that unnecessary barriers and validations are avoided and that the code is race-free and optimized.

Community Issues DPDK 21.05 Release

By Blog

By Thomas Monjalon, DPDK Tech Board

A new DPDK release is now available, 21.05:  https://fast.dpdk.org/rel/dpdk-21.05.tar.xz

It was quite a big cycle, as is typical with our May releases:

  • 1352 commits from 176 authors
  • 2396 files changed, 134413 insertions(+), 63913 deletions(-)

There are no plans to start a maintenance branch for 21.05, and this version is ABI-compatible with 20.11 and 21.02.

Below are some new features:

  • General
    • compilation support for GCC 11, clang 12 and Alpine Linux
    • mass renaming of lib directories
    • allow disabling some libraries
    • log names alignment and help command
    • phase-fair lock
    • Marvell CN10K driver
  • Networking
    • predictable RSS
    • port representor syntax for sub-function and multi-host
    • metering extended to flow rule matching
    • packet integrity in flow rule matching
    • TCP connection tracking offload with flow rule
    • Windows support of ice, pcap and vmxnet3 drivers

More details in the release notes:  https://doc.dpdk.org/guides/rel_notes/release_21_05.html

There are 41 new contributors (including authors, reviewers and testers)!  Welcome to Alexandre Ferrieux, Amir Shay, Ashish Paul, Ashwin Sekhar T K, Chaoyong He, David Bouyeure, Dheemanth Mallikarjun, Elad Nachman, Gabriel Ganne, Haifei Luo, Hengjian Zhang, Jie Wang, John Hurley, Joshua Hay, Junjie Wan, Kai Ji, Kamil Vojanec, Kathleen Capella, Keiichi Watanabe, Luc Pelletier, Maciej Machnikowski, Piotr Kubaj, Pu Xu, Richael Zhuang, Robert Malz, Roy Shterman, Salem Sol, Shay Agroskin, Shun Hao, Siwar Zitouni, Smadar Fuks, Sridhar Samudrala, Srikanth Yalavarthi, Stanislaw Kardach, Stefan Wegrzyn, Tengfei Zhang, Tianyu Li, Vidya Sagar Velumuri, Vishwas Danivas, Wenwu Ma and Yan Xia.

Below is the percentage of commits per employer company:

Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:

         46     Ferruh Yigit 
         39     David Marchand 
         34     Andrew Rybchenko 
         30     Bruce Richardson
         27     Maxime Coquelin 
         26     Viacheslav Ovsiienko
         25     Akhil Goyal 
         23     Thomas Monjalon 
         22     Jerin Jacob 
         20     Ajit Khaparde 
         18     Xiaoyun Li 
         18     Anatoly Burakov 
         16     Ori Kam 
         12     Matan Azrad 
         12     Konstantin Ananyev 
         12     Honnappa Nagarahalli 
         11     Olivier Matz 
         11     Hemant Agrawal 
         10     Ruifeng Wang 

The new features for 21.08 may be submitted through 1 June.

DPDK 21.08 should be released in early August, on a tight schedule: http://core.dpdk.org/roadmap#dates

Please share your feature roadmaps.

Thanks, everyone, for allowing us to close a great 21.05 on 21/05!