
DPDK Dispatch August

By Monthly Newsletter


1. Main Announcements


Register now for the DPDK Summit Montreal Sep 24
Highlights from DPDK Summit APAC: Updates in Brief
DPDK Summit APAC sessions now live on YouTube

2. User Stories & Dev Spotlights

Submit a blog here
Submit a developer spotlight here

3. DPDK & Technologies in the news:


Unleashing 100GbE network efficiency: SR-IOV in Red Hat OpenShift on OpenStack
Introduction to Elastic Load Balancing in AWS
Redox: An operating system in Rust [LWN.net]
DataCore gets AI development funding dollars – Blocks and Files
Netgate Releases TNSR Software Version 24.06
5G Factor: OCI, AWS, Google Cloud Make Major Telco Moves
Red Hat Performance and Scale Engineering
IABM Technology and Trends Roadmap


4. Performance Reports & Meeting Minutes

Latest performance reports
Maintainer meeting minutes
Governing Board meeting minutes

This newsletter is sent to thousands of DPDK developers and is a collaborative effort. If you have a project release, pull request, community event, or relevant article you would like to be considered as a highlight for next month, please reply to marketing@dpdk.org

 
Thank you for your continued support and enthusiasm.

DPDK Team.

Highlights from DPDK Summit APAC

By Blog

Welcome & Opening Remarks – Thomas Monjalon, Maintainer, NVIDIA

Thomas Monjalon, a maintainer at NVIDIA, opened the summit in Bangkok by emphasizing the importance of the community to the project. He highlighted the role of contributors like Ben Thomas, who handles marketing, and encouraged attendees to share their stories and get involved.

Thomas provided logistical details about the event and discussed the project’s history, noting its growth since its inception in 2010 and the support of the Linux Foundation. Looking ahead, Monjalon outlined priorities such as better public cloud integration, enhanced security protocols, and contributions to AI.

He stressed the long-term benefits of contributing to open source, including thorough documentation and community support, and noted that being part of the community can help individuals find new job opportunities.

Introducing UACCE Bus of DPDK – Feng Chengwen, Huawei Technologies Co., Ltd

Feng Chengwen from Huawei Technologies presented on UACCE (Unified Accelerator Framework) support integrated into DPDK. UACCE is designed to simplify usage and enhance performance and security for user-space I/O and DMA operations without per-operation kernel involvement. It was upstreamed in version 5.7 of the Linux kernel and is supported in the latest DPDK release, 24.03, allowing accelerators to access process memory regions directly and eliminating the need for explicit address translation.

Key objectives include high performance, simplified usage, and security, with support for multiprocess memory acceleration and on-demand resource usage. UACCE addresses performance issues such as page faults and NUMA balancing using techniques like CPU pre-access and memory binding. 

It is used in both host systems and virtual machines, though some features for virtual machines are still in development. Feng encouraged other developers to adopt UACCE, highlighting its broader application potential, and discussed future enhancements to integrate more devices into DPDK using the UACCE framework.

ZXDH DPU Adapter and Its Application – Lijie Shan & Wang Junlong, ZTE

The presentation introduces the ZXDH DPU driver, highlighting its features, applications, and product portfolio. The DPU system framework includes modules for high-speed network interfaces, PCI connectivity, and advanced packet processing, supporting RDMA, NVMe protocols, and multiple accelerators for security and storage. 

It enhances network and storage performance by offloading tasks from the host CPU, supporting virtualization, AI, and edge computing, with capabilities like TLS encryption. An example of offloading security group functions to the DPU demonstrates reduced CPU load and increased processing efficiency. 

The product portfolio supports up to 5 million IOPS and 100 million packets per second, with ongoing development to improve TCP protocol handling and storage acceleration.

Libtpa Introduction – Yuanhan Liu, Bytedance

Yuanhan Liu from ByteDance introduces Libtpa, a user-space TCP/IP stack developed from scratch. The presentation discusses its background, design, testing, and performance. 

Traditional kernel-based TCP/IP stacks have inefficiencies and overhead, and existing user-space TCP stacks face problems like breaking the kernel stack, limited zero-copy support, and inadequate testing and debugging tools. 

Libtpa addresses these issues by allowing coexistence with the kernel stack, supporting multiple user-space instances, optimizing performance with zero-copy, and providing extensive testing and debugging capabilities. 

Its architecture supports high throughput and low latency, achieving significant performance improvements. Libtpa includes over 200 unit tests and advanced debugging tools to ensure stability and ease of troubleshooting in production environments.

Telecom Packet Processing and Correlation Engine Using DPDK – Ilan Raman, Aviz Networks

Ilan Raman and his colleagues from Aviz Networks developed 5G packet processing applications using DPDK to manage complex 4G and 5G traffic on commodity hardware efficiently. Aviz Networks, founded in 2019 and operating in the USA, India, and Japan, focuses on providing open-source solutions for telecom operators. 

They address challenges in monitoring evolving mobile technologies, scaling solutions horizontally, and reducing TCO through software-driven approaches. A primary use case is 5G correlation, which enhances network performance monitoring by correlating control and user traffic. Deployment involves DPDK-based applications on commodity hardware, processing high-bandwidth traffic, and extracting valuable metadata for insights and capacity planning. 

The architecture uses a run-to-completion model, distributing functions across dedicated cores to handle various traffic types, with scalability achieved through RSS functionality in NICs. Practical learnings include configuring RSS for different packet types, ensuring symmetric load balancing, using per-core hash tables, isolating DPDK cores from the Linux kernel, and performing deep packet parsing in software. 

Aviz Networks leveraged DPDK’s packet manipulation libraries for handling custom headers and achieved better performance through memory optimizations and CPU isolation.
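
To make the symmetric RSS scaling described above concrete, here is a minimal sketch of how such a setup can be expressed with DPDK’s ethdev API. The queue counts, the hash types, and the commonly used 0x6d5a repeating key are illustrative assumptions; the configuration Aviz Networks actually deployed was not shared in the talk.

```c
#include <rte_ethdev.h>

/* Widely used 40-byte symmetric RSS key (0x6d5a repeated): both directions
 * of a TCP/UDP flow hash to the same queue, and therefore the same core. */
static uint8_t sym_rss_key[40] = {
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
};

/* Configure a port with RSS over IP/TCP/UDP headers. One RX queue per
 * worker core keeps the per-core hash tables mentioned above private. */
static int
configure_symmetric_rss(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q)
{
	struct rte_eth_conf conf = {0};

	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_key = sym_rss_key;
	conf.rx_adv_conf.rss_conf.rss_key_len = sizeof(sym_rss_key);
	conf.rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP;

	return rte_eth_dev_configure(port_id, nb_rx_q, nb_tx_q, &conf);
}
```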

Unified Representor with Large Scale Ports – Suanming Mou, NVIDIA Semiconductor

The presentation by Suanming Mou from NVIDIA focused on optimizing the unified representor for large-scale ports in DPDK switch models. Initially, the high memory and CPU usage caused by the need to poll all representor ports when packets missed hardware flow rules was a significant challenge.

The optimization approach involved setting “representor matching” to zero, directing all packets to a single uplink representor port, and copying the source port ID to packet metadata for identification in the hypervisor. This change reduced the need for extensive memory allocation and CPU polling, as traffic was handled through a single proxy port.

The implementation of new flow rules for this setup resulted in substantial memory savings, decreasing from over 800 MB to around 332 MB, and improved packets-per-second (PPS) performance, increasing from 20 Mpps to 27.5 Mpps thanks to optimized polling and reduced cache misses. Overall, the optimization streamlined the polling process and significantly enhanced resource efficiency and performance in managing large-scale port traffic.

Troubleshooting Low Latency Application on CNF Deployment – Vipin Varghese & Sivaprasad Tummala, AMD

The presentation addresses the challenges encountered when transitioning applications from bare metal to container environments, emphasizing issues like reduced throughput, increased packet processing time, fluctuating latencies, and unpredictable performance with multiple container instances. 

Root causes of these problems include limited access to hardware resources, library and compiler version mismatches, lack of specific patches, and performance variations based on hardware architecture and deployment models. Through several case studies, Vipin and Sivaprasad underscore the importance of profiling applications on bare metal before deployment, using tools like flame graphs and perf, and understanding hardware details such as cache domains and PCI bus partitioning for optimization. 

They call for enhanced telemetry and observability in DPDK for containerized environments, noting that current tools and documentation are inadequate for complex troubleshooting. Recommendations include extending DPDK’s telemetry infrastructure, utilizing eBPF hooks for improved runtime data collection, and ensuring consistent performance through better documentation, custom plugins for CPU isolation, and awareness of hardware-specific optimizations.

Suggestions to Enhance DPDK to Enable Migration of User Space Networking Applications to DPDK – Vivek Gupta, Benison Technologies Pvt Ltd

The presentation by Vivek Gupta delves into enhancing DPDK to facilitate the migration of various user space networking applications, pinpointing a critical issue: advancements in CPU, IO, and memory technologies are not benefiting these applications. Despite significant improvements in infrastructure, user space networking applications often fail to utilize these advancements effectively. This gap highlights the need for a framework that can bring the benefits of these technological improvements to user space frameworks, ensuring better performance and efficiency.

Customers face numerous challenges when attempting to migrate their existing user space applications to DPDK or VPP environments without rewriting them. These applications, which traditionally rely on Linux kernel methods, encounter significant hurdles during migration. The proposed solution is to create a unified framework that integrates various technologies, such as ef_vi and VPP, to enhance the performance of these applications. This framework would support different levels of packet processing, from L2 to L4, and provide essential mechanisms for encryption, decryption, deep packet inspection, and proxy functions.

To meet customer needs, the framework should enable applications to capture and inject packets at various levels, from the interface to higher layers. It should support state management and route updates from control applications, ensuring that applications always operate with the most current data. Additionally, the framework must offer accelerators for cryptographic and AI/ML-based processing to handle the complex requirements of modern applications. By addressing issues related to threading, caching, and reducing contention, the framework aims to significantly improve the performance of user space applications.

Practical examples underscore the potential benefits of this framework. For instance, enhancing web servers, proxy servers, and video streaming applications using the proposed framework could lead to substantial performance gains. By tackling issues such as blocking operations and optimizing thread management, applications can achieve higher throughput and better resource utilization. The framework should also cater to the needs of high-speed applications and support flexible application architectures, enabling user space applications to become more efficient and faster.

In conclusion, the proposed enhancements to DPDK aim to bridge the gap between advancements in infrastructure and the performance of user space networking applications. By providing a comprehensive framework that supports various processing levels, state management, and cryptographic acceleration, the solution promises to improve application performance, reduce contention, and enhance resource utilization. This approach will help customers migrate their applications more effectively and realize the full benefits of technological advancements in CPU, IO, and memory technologies.

Welcome Back – Prasun Kapoor, Associate Vice President, Marvell

The Asia Pacific (APAC) region, particularly India and China, has established a strong and dynamic community around the Data Plane Development Kit (DPDK). Recognizing this, the decision was made to hold the DPDK APAC Summit in Thailand, a geopolitically neutral location that facilitates easy participation from various APAC countries without visa complications.

The DPDK project is witnessing robust growth in multiple areas, including technical contributions, marketing outreach, and the number of active contributors. This growth is further bolstered by increasing interest from new prospective corporate members, indicating a healthy and expanding ecosystem.

Significant updates have been made to the University of New Hampshire (UNH) lab, which has recently incorporated the Marvell CN10K Data Processing Unit (DPU) into its testing suite. The lab now reports DPDK Test Suite (DTS) results for a variety of tests and has established a community dashboard for code coverage, releasing monthly reports. Additionally, the lab has been proactive in submitting patches and bug fixes and is running compilation tests for Open vSwitch (OVS) with each DPDK patch, with future plans to include performance testing.

Marketing efforts for DPDK have seen a considerable boost, with increased engagement on platforms like LinkedIn and a notable rise in YouTube views, which is seen as a leading indicator of the project’s growing interest. The steady increase in DPDK downloads further underscores the project’s rising popularity.

Enhancements to the DPDK documentation have also been a focal point, with updates to the Poll Mode Driver (PMD) guidelines, security protocol documentation, and multiple sections of the programmer’s guide and contributor guidelines. Financially, the DPDK project is in a strong position with a healthy budget and substantial reserves. This financial stability ensures that key activities such as summits, community labs, and marketing efforts are well-supported for the foreseeable future.

Coupling Eventdev Usage with Traffic Metering & Policing (QoS) – Sachin Saxena & Apeksha Gupta, NXP

Sachin Saxena and Apeksha Gupta from NXP presented on integrating Eventdev with Traffic Metering and Policing to enhance Quality of Service (QoS). They discussed the various requirements from customers and the comprehensive solution they developed to meet these demands. Their goal was to share their extensive work and experiences with the community, offering insights into how similar challenges can be addressed effectively.

They highlighted different customer requirements, such as the need for traffic classification and scheduling in hardware, reducing CPU cycle usage, and implementing custom schedulers. By leveraging the DPDK framework, they managed to consolidate these varied needs into a generic solution. This approach not only met the specific requirements but also provided a reference for others in the community who might face similar challenges.

The technical approach of their solution involved utilizing DPDK’s metering, policing, and Eventdev frameworks. They explained how these components interact to meet the specified use cases, enhancing overall efficiency and performance. By breaking down complex use cases into manageable components and mapping these to corresponding RTE library elements, they ensured robust end-to-end functionality.

In their implementation details, they described the method of segmenting use cases into multiple components and aligning these with the appropriate RTE library components. This strategy ensured that each part of the system worked seamlessly together, achieving the desired outcomes effectively and efficiently.

To illustrate their points, they shared practical use cases, including the management of scheduling priorities, grouping multiple ports, and applying markers and policers at the priority group level. These examples demonstrated how to optimize CPU cycles and prevent data loss, showcasing the practical applications of their solution in real-world scenarios.
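
As a rough illustration of the metering and policing building blocks these use cases rely on, the sketch below configures a single-rate three-color marker with DPDK’s librte_meter and drops packets marked red. The rate and burst values are placeholder assumptions; the NXP solution described in the talk couples this kind of policing with Eventdev scheduling rather than a simple software check.

```c
#include <rte_meter.h>
#include <rte_cycles.h>

/* Illustrative srTCM profile: ~1 Gbit/s committed rate expressed in
 * bytes/sec, with 64 KB committed and excess burst sizes. */
static struct rte_meter_srtcm_params srtcm_params = {
	.cir = 125000000,	/* committed information rate, bytes/sec */
	.cbs = 65536,		/* committed burst size, bytes */
	.ebs = 65536,		/* excess burst size, bytes */
};

static struct rte_meter_srtcm_profile srtcm_profile;
static struct rte_meter_srtcm srtcm_meter;

static int
meter_setup(void)
{
	int ret = rte_meter_srtcm_profile_config(&srtcm_profile, &srtcm_params);

	if (ret != 0)
		return ret;
	return rte_meter_srtcm_config(&srtcm_meter, &srtcm_profile);
}

/* Color a packet against the meter; return 0 to forward, -1 to drop. */
static int
meter_police_packet(uint32_t pkt_len)
{
	enum rte_color color = rte_meter_srtcm_color_blind_check(
		&srtcm_meter, &srtcm_profile, rte_rdtsc(), pkt_len);

	return (color == RTE_COLOR_RED) ? -1 : 0;
}
```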

GRO Library Enhancements – Kumara Parameshwaran Rathinavel, Microsoft

Kumara Parameshwaran Rathinavel from Microsoft has been working on enhancing the Generic Receive Offload (GRO) library. GRO is a widely used software technique that optimizes packet processing by merging multiple TCP segments into a single large segment. Kumara has been contributing to this project since his time at VMware and continues to do so at Microsoft. His work aims to improve the efficiency and performance of GRO, particularly in the context of network traffic handling.

The current implementation of GRO, which involves iteratively checking a table for flow matches, has been identified as suboptimal for handling packets received in multiple bursts. This method can lead to inefficiencies, especially as the timeout intervals increase. Kumara highlighted that the existing approach struggles with scalability and performance under these conditions, necessitating a more efficient solution.

To address these limitations, Kumara proposed a hash-based method for flow matching. This new approach significantly enhances the efficiency of the GRO process. In tests, the hash-based method demonstrated substantial performance improvements, reducing the CPU utilization of the GRO reassemble function. This method not only optimizes the flow matching process but also ensures better handling of packet bursts, leading to overall improved performance.

Recognizing the varying latency requirements of different applications, Kumara suggested implementing tuple-specific timeouts within the GRO framework. This approach allows for more flexible and optimized GRO settings tailored to the specific needs of various applications. For instance, applications with low latency requirements, such as banking transactions, can have shorter timeouts, while those with less stringent latency needs can benefit from longer timeouts. This customization ensures that all applications can operate efficiently without compromising on performance.

To validate these enhancements, Kumara used a setup involving a virtual machine as a test proxy, demonstrating notable performance gains. The improvements in GRO are particularly beneficial for network applications like load balancers, where reducing CPU utilization and improving packet processing efficiency are critical. Kumara’s work on GRO library enhancements showcases significant advancements in optimizing network traffic handling, contributing to more efficient and scalable network performance.
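
For readers who have not used the library Kumara is enhancing, the sketch below shows the lightweight GRO API applied to a received burst. The burst size, flow limits, and the TCP/IPv4-only GRO type are illustrative assumptions rather than tuned settings.

```c
#include <rte_ethdev.h>
#include <rte_gro.h>
#include <rte_lcore.h>

#define BURST_SIZE 32

/* Receive a burst and merge TCP/IPv4 segments in place with the
 * lightweight GRO API; returns the number of packets left after merging. */
static uint16_t
rx_burst_with_gro(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **pkts)
{
	struct rte_gro_param gro_param = {
		.gro_types = RTE_GRO_TCP_IPV4,
		.max_flow_num = 64,
		.max_item_per_flow = 32,
		.socket_id = rte_socket_id(),
	};
	uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);

	if (nb_rx == 0)
		return 0;
	return rte_gro_reassemble_burst(pkts, nb_rx, &gro_param);
}
```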

Refactor Power Library for Vendor Agnostic Uncore APIs – Sivaprasad Tummala & Vipin Varghese, AMD

The presentation focuses on the critical need for improved power management and efficiency for Telco operators, emphasizing the importance of vendor-agnostic solutions for scalability across different platforms. This is particularly relevant as power has become a significant concern, with the need to optimize performance per watt and manage power effectively.

The power library within DPDK (the Data Plane Development Kit) aims to address these concerns by balancing power consumption and performance. The library optimizes core and uncore frequency management and introduces adaptive algorithms for real-time monitoring and idle state management. This ensures that while cores are busy polling, they consume power efficiently without compromising performance.
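
For context, the sketch below shows the existing per-lcore frequency interface in DPDK’s power library, which the refactoring discussed here generalizes. The choice of the acpi-cpufreq environment and the error handling are illustrative assumptions; the new vendor-agnostic uncore calls are not shown because their final names and signatures were still being settled at the time of the talk.

```c
#include <rte_power.h>

/* Minimal sketch of today's per-lcore frequency control. Scaling a core
 * down while it is mostly idle, or up while it is busy polling, is the
 * kind of decision the adaptive algorithms described above automate. */
static int
power_setup_lcore(unsigned int lcore_id)
{
	/* Pick a power-management backend before initializing lcores. */
	if (rte_power_set_env(PM_ENV_ACPI_CPUFREQ) != 0)
		return -1;

	if (rte_power_init(lcore_id) != 0)
		return -1;

	/* Start the core at its lowest frequency; rte_power_freq_max()
	 * would do the opposite under sustained load. */
	return rte_power_freq_min(lcore_id) < 0 ? -1 : 0;
}
```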

Currently, the power library is tightly coupled with Linux and requires specific modifications to accommodate new drivers, leading to inefficiencies and increased code size. Each new driver introduction necessitates changes to the core library, increasing the complexity and effort required for maintenance and updates. This approach is not scalable as the number of drivers and their capabilities grow.

To address these challenges, the refactoring efforts aim to modularize the power library, enabling plug-and-play capabilities for new drivers and reducing dependencies. This modular approach will simplify the addition of new drivers, improve performance, and enhance scalability by minimizing the library’s footprint and code complexity.

The proposed enhancements include vendor-agnostic uncore APIs to manage interconnect bus frequencies and dynamic link width management. These APIs promote a standardized interface for power management across different hardware vendors, making it easier for applications to develop power management solutions without being tied to specific vendors. This approach not only reduces complexity but also ensures compatibility and scalability across various platforms.

Q&A with the Governing Board & Technical Board – Wang Yong, Thomas Monjalon, Jerin Jacob

The Technical Board (TBoard) and Governing Board (GBoard) of the project play distinct but complementary roles in steering the community. The TBoard consists of 11 members who meet bi-weekly to discuss and resolve technical issues. These meetings, conducted via Zoom, involve all community members and focus on consensus-driven decision-making. When consensus cannot be reached, the TBoard votes on issues, requiring prior email submissions for agenda inclusion. This structured approach ensures thorough consideration and discussion before decisions are made.

The GBoard, on the other hand, sets the project’s broad direction, encompassing administrative tasks, marketing strategies, and budgeting. This board meets monthly and includes a permanent chairperson along with representatives from the Linux Foundation. The GBoard comprises 12 members: 10 from golden member companies, one from a silver member, and one from the TBoard. Every six weeks, the GBoard convenes, and every three months, they hold joint meetings with the TBoard to align on financial plans, marketing efforts, and major project decisions.

Membership in the GBoard is tiered, with gold members contributing $50,000 annually and silver members contributing $10,000 annually. These funds are crucial for project initiatives, such as summits and acquiring new servers for the lab. Gold members play a significant role in decision-making due to their financial contributions, ensuring their interests and investments are aligned with project goals.

Community involvement is a cornerstone of both boards’ operations. TBoard meetings are open to all, fostering transparency and inclusivity in technical discussions. Issues are raised via email, ensuring that all voices can be heard. The GBoard, while more focused on strategic direction, includes representatives from various companies to bring diverse perspectives to the table. This collaborative approach allows for comprehensive planning and execution of project initiatives.

Currently, the boards are prioritizing several key areas: enhancing security protocols and documentation, improving continuous integration (CI) performance testing, and integrating more functional testing in the DPDK Test Suite (DTS). Future plans include creating a performance dashboard and requiring contributors to add tests for new features. These efforts aim to maintain high standards of performance and security, ensuring the project’s robustness and reliability for all users.

Rte_flow Match with Comparison Result – Suanming Mou, NVIDIA Semiconductor

The presentation introduces a new feature for rte_flow, which focuses on comparison operations to enhance the flexibility of flow rules. This feature allows for comparisons between fields or between a field and an immediate value, providing more dynamic and versatile rule configurations. The presenter assumes familiarity with rte_flow from previous sessions and emphasizes the advantages of this new capability.

Traditional rte_flow rules are limited to matching immediate values, which can be restrictive in certain scenarios. For instance, in TCP connection tracking, the termination of connections often goes unnoticed by software if the reset packet is handled by hardware directly. Similarly, for packet payload evaluation, users may want to skip cryptographic operations on packets without payloads. These examples highlight the need for more advanced comparison methods in flow rules.

The new feature supports a range of comparison operations, including greater than, less than, and equal comparisons. It has been initially implemented in ConnectX-7 and BlueField-3 NICs, specifically within the template API. However, there are limitations, such as the inability to mix comparison items with other items and restricted field support. The feature is designed to be flexible but currently has hardware constraints that limit its full potential.

Users can configure these comparison rules using a new `item compare` structure in the API. This involves specifying the fields to compare, the immediate values, and the desired operations, such as equal, not equal, greater than, and so forth. The configuration also supports specific bit-width comparisons, providing detailed control over how comparisons are executed. This structure aims to offer a robust framework for implementing dynamic and complex flow rules.

Several examples demonstrate the use of the new comparison item in flow rules, illustrating its practical application. Despite its benefits, the feature currently supports only single comparison rules within flow tables and a limited range of fields. The requirement for both spec and mask in the configuration is due to the template API structure, which mandates these elements even if they might not be necessary for all comparisons. Suanming Mou concludes by encouraging other developers to integrate support for this feature in their PMDs, recognizing its potential to significantly enhance rte_flow’s capabilities.
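
As a structural illustration only, the sketch below shows where the new COMPARE item sits in an ordinary rte_flow pattern. The feature described in the talk is exposed through the template API on ConnectX-7 and BlueField-3, and the members of struct rte_flow_item_compare (the two field descriptors, the immediate value, and the comparison width) are release-specific, so they are deliberately left unfilled here and should be populated from the installed DPDK headers.

```c
#include <rte_flow.h>

/* Hedged sketch: steer packets to queue 1 when a COMPARE item matches.
 * The spec is left zeroed; in a real rule its operation, field
 * descriptors, and width must be filled in (and a mask supplied when
 * using the template API), per the DPDK release in use. */
static struct rte_flow *
install_compare_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
	struct rte_flow_item_compare cmp_spec = {0};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_COMPARE, .spec = &cmp_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```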

DPDK PMD Live Upgrade – Rongwei Liu, Nvidia

The DPDK PMD live upgrade process is designed to meet the critical need for upgrading or downgrading PMD versions seamlessly without disrupting ongoing services. This process ensures the transfer of user configurations while minimizing downtime to nearly zero, making it essential for applications requiring continuous operation.

Two primary approaches are detailed for conducting these upgrades. The first approach involves a graceful exit of the old PMD followed by the restart of the new PMD, during which there is a brief period of service unavailability. The second approach utilizes a standby mode, where the new PMD is prepared with the necessary configurations but remains inactive until the old PMD exits. This method ensures that there is no service disruption as the traffic seamlessly switches to the new PMD once the old one exits.

To facilitate this process, two modes are introduced: active and standby. In the active mode, the PMD manages traffic and hardware configuration directly. In standby mode, configurations are set up but do not affect traffic immediately. Instead, they become active only when the old PMD gracefully exits, ensuring that the traffic handling transitions smoothly without any interruption.

A crucial aspect of the upgrade process is the use of group zero as a dispatcher for traffic processing rules. This mechanism ensures that all configurations are synchronized and become effective immediately, eliminating any downtime or disruption in traffic flow. By inserting and managing these rules efficiently, the system can transition from the old PMD to the new one seamlessly.
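
To illustrate the “group zero as dispatcher” idea in generic rte_flow terms, a single root-group rule can jump all ingress traffic to a table group owned by the currently active instance, as sketched below. This is a conceptual sketch with assumed group numbers, not the PMD’s internal live-upgrade implementation.

```c
#include <rte_flow.h>

/* A lone rule in root group 0 dispatches all ingress traffic to the group
 * owned by the active instance. Re-pointing the dispatcher is conceptually
 * how traffic moves from the old PMD's tables to the new one's. */
static struct rte_flow *
install_dispatcher(uint16_t port_id, uint32_t active_group,
		   struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_jump jump = { .group = active_group };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```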

Finally, the process is designed to be highly scalable, allowing for adaptable resource usage to accommodate various deployment scales. It also emphasizes the importance of a user-friendly API, ensuring that users can access and utilize the upgrade features quickly and easily, thus enhancing the overall efficiency and effectiveness of the live upgrade process.

Monitoring 400G Traffic in DPDK Using FPGA-Based SmartNIC with RTE Flow – David Vodák, Cesnet

David Vodák from Cesnet presented their journey to enable 400G traffic monitoring using DPDK and FPGA-based SmartNICs, a project initiated due to the lack of suitable FPGA cards in the market. Cesnet, a national research and educational network, designed the FPGA SmartNIC which utilizes the Intel HX I7 chip with 400G Ethernet support and PCIe gen 4/5 compatibility. This card is engineered for high-speed processing, making it ideal for their needs.

The cornerstone of their solution is the NDK platform, an open-source framework that supports up to 400G throughput. NDK facilitates parallel processing, filtering, and metadata export, which are crucial for handling high-speed network traffic. It is designed to be highly adaptable, allowing users to create new components or use existing ones to build custom firmware for various applications.

NDK’s versatility extends beyond monitoring; it is also used for high-frequency trading and CL testing. One of the open-source tools developed by Cesnet, ipfixprobe, supports DPDK and is employed to create detailed traffic flows from input packets. This probe exemplifies the practical applications of NDK in real-world scenarios, demonstrating its robustness and flexibility.

To ensure the reliability of their solutions, Cesnet employs rigorous testing and verification methods. Functional testing is conducted using tools like testpmd or custom DPDK applications in loopback setups. For benchmarking, they utilize external traffic generators such as Spirent and FlowTest, with the latter capable of simulating realistic network traffic to provide more accurate testing results.

Looking ahead, Cesnet plans to expand the capabilities of the NDK platform to support various cards and use cases beyond traffic monitoring. Their commitment to open source development is evident, as they provide extensive resources on GitHub for the community to collaborate and innovate further. This open source approach not only fosters community involvement but also drives continuous improvement and adaptation of their technology.

Lessons Learnt from Reusing QDMA NIC to Base Band PMD – Vipin Varghese & Sivaprasad Tummala, AMD

AMD undertook a project to repurpose its QDMA NIC for Forward Error Correction (FEC) offloading in virtual RAN environments using an FPGA-based prototype. The goal was to support LDPC encode/decode functionalities without developing the BBDEV PMD from scratch. Instead, the team adapted existing QDMA NIC code, incorporating necessary modifications to create a functional BB PMD. This approach allowed for rapid prototyping and integration within a short time frame.

Throughout the project, several challenges arose, including mismatched use cases, software latencies, and inadequate thread handling. To address these, the team implemented solutions such as using selective builds and applying compiler pragmas. These strategies helped optimize the RX/TX burst functionalities and reduce instruction cache misses, which in turn minimized overall latency.

Significant efforts were made to reduce latencies, which were initially high due to multiple factors including software and PMD-related overheads. By minimizing instruction cache misses and optimizing critical functions to fit within smaller memory pages, the team achieved notable latency reductions. Further improvements were realized by implementing lockless mechanisms using rte_ring, ensuring efficient enqueue/dequeue operations.
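
A brief sketch of the lockless rte_ring pattern mentioned above: one thread enqueues request pointers while another drains them, with no locks on either side. The ring name, size, and single-producer/single-consumer flags are illustrative assumptions rather than the actual BBDEV PMD internals.

```c
#include <rte_ring.h>
#include <rte_lcore.h>

#define REQ_RING_SIZE 1024

static struct rte_ring *req_ring;

/* Single-producer/single-consumer ring: enqueue and dequeue never block
 * and never take a lock. */
static int
req_ring_init(void)
{
	req_ring = rte_ring_create("ldpc_req_ring", REQ_RING_SIZE,
				   rte_socket_id(),
				   RING_F_SP_ENQ | RING_F_SC_DEQ);
	return req_ring == NULL ? -1 : 0;
}

/* Producer side: hand a batch of request pointers to the worker core. */
static unsigned int
req_submit(void **reqs, unsigned int n)
{
	return rte_ring_enqueue_burst(req_ring, reqs, n, NULL);
}

/* Consumer side: pull up to 'n' requests for processing. */
static unsigned int
req_poll(void **reqs, unsigned int n)
{
	return rte_ring_dequeue_burst(req_ring, reqs, n, NULL);
}
```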

To meet the specific requirements of the customer for low-latency and high-throughput, the team had to go beyond simple test scenarios and adapt the software implementations to better match real-world use cases. This involved modifying the BBDEV PMD to handle multiple threads and ensuring proper mapping and distribution of LDPC encode and decode requests, which significantly improved performance and reliability.

The project highlighted several important lessons. It underscored the value of reusing existing PMDs when feasible, as well as the need to reduce code bloating and align PMD examples with actual customer use cases. The team recommended updates for the DPDK community to focus more on low latency and stress testing, and to improve lockless implementations. These insights and improvements contributed to a more robust and efficient solution, ultimately enhancing the overall performance of the system.

Closing Remarks – Nathan Southern, Sr. Projects Coordinator, The Linux Foundation

The DPDK conference in Bangkok marked a significant milestone as the first APAC event since COVID-19. With a total of 63 participants, including 30 in person and 33 online, the event exceeded expectations. This conference, considered an experiment in a geopolitically neutral location, was deemed successful and has paved the way for potential future APAC events in various locations.

DPDK, a project now 14 years old, has shown remarkable growth and resilience, countering previous perceptions of being in its sunset phase. Since Nathan joined the Linux Foundation in April 2022, the project has maintained nearly perfect member retention and continued technological advancement. This longevity and sustained momentum underscore the project’s vitality and relevance in the tech community.

Strategic efforts in marketing and documentation have significantly enhanced the project’s visibility and usability. Under the direction of Ben Thomas, marketing initiatives have been robust, and technical writer Nandini Persad has overhauled the project’s documentation. These efforts aim to foster community growth and engagement, ensuring that DPDK remains a valuable resource for its users.

The DPDK project is evolving in critical areas such as security, cloud hyperscaling, and AI. This evolution is driven by community input and the guidance of the tech board, including leaders like Thomas Monjalon. Continuous community involvement is essential for future advancements, highlighting the importance of active participation from all stakeholders.

Nathan emphasized the importance of community engagement in driving DPDK’s development forward. He encouraged attendees to participate through Slack, tech board calls, and contributions to the OSS code. Additionally, the project is actively creating dynamic content, including end-user stories and developer spotlights, to promote mutual growth and expand the membership base. This focus on community and content creation is key to sustaining and growing the DPDK project.

Watch all the summit videos here

DPDK Dispatch July

By Monthly Newsletter

1. Main Announcements

    • Last chance to register in-person or virtually for the DPDK APAC Summit July 9-10 register here
    • Apply to speak at the DPDK Summit 2024 Montreal by July 21

3. User Stories, Dev Spotlights

4. DPDK & Technologies in the news:

5. Performance Reports & Meeting Minutes

This newsletter is sent to thousands of DPDK developers and is a collaborative effort. If you have a project release, pull request, community event, or relevant article you would like to be considered as a highlight for next month, please reply to marketing@dpdk.org

Thank you for your continued support and enthusiasm.

DPDK Team.

The Journey of Jerin Jacob: From Embedded Linux Engineer to DPDK Leadership

By Community Spotlight

Jerin Jacob, a Senior Director at Marvell, is a pivotal maintainer in the DPDK community. With 20 years of experience, Jerin’s career began with Linux kernel development, laying the groundwork for his extensive contributions to high-performance networking. After joining Cavium, later acquired by Marvell, Jerin was tasked with supporting an open-source data plane framework on the OCTEON processor family, marking the beginning of his journey with DPDK.

A Natural Progression into Software Development

Jerin’s journey into software development began in his early childhood. He recalls playing card games and needing to emulate dice rolls. Using a bit of ingenuity and low-level programming, he turned a calculator into a makeshift dice emulator. This early exposure to basic programming concepts sparked his interest in computers during the early 1990s.

After completing his school years, Jerin pursued a diploma in Electronics and Communication, followed by a degree in Computer Science Engineering. During his diploma, he had access to Windows 3.1, where he started programming in BASIC and doing system programming. These initial experiences laid a strong foundation for his future endeavors.

In his engineering studies, Jerin transitioned to Linux kernel work, a significant shift from his earlier experiences with Windows 95. His first job involved moving Linux to embedded systems, a novel concept at the time. He worked on multimedia SoCs, focusing on Linux architecture/SoC porting and peripheral drivers for PCI, USB, and storage. This period, encompassing about ten years of Linux development, significantly shaped his skills and expertise.

After a decade in Linux development, Jerin moved to Cavium, a company known for its innovation in the semiconductor industry that specialized in ARM-based and MIPS-based network, video, and security processors. This opportunity marked a new era in his career as he delved into user-space data plane work, which required a different mindset. Optimizing for performance became paramount, and he honed new skills in system-wide knowledge, cache architecture, virtualization, SMMU (System Memory Management Unit), and writing optimized drivers.

This skill set, balancing high performance with the flexibility to accommodate various vendors in driver subsystem development, set the stage for his future contributions to DPDK, where he would learn to create vendor-neutral APIs that maintained performance while enabling contributions from multiple vendors.

The Technical Transition from Cavium to Marvell

Initially, when Cavium was in the data plane market, its OCTEON product line was supported primarily by a proprietary SDK (Software Development Kit) based on the MIPS architecture. To attract more customers and leverage open source activities, they decided to contribute to DPDK. Jerin led the Cavium/Marvell transition in this effort, adding ARM64 architecture support and specific hardware drivers.

Most of this work was done during his time at Cavium, including the initial ARM64 port. When Cavium’s acquisition by Marvell began, the focus shifted to fully integrating open-source contributions, moving away from proprietary SDKs. Marvell was instrumental in initiating this shift. In particular, Jerin’s manager at the time, Prasun Kapoor (now Assistant Vice President of Core Software, Infrastructure Processors at Marvell Technology), was pivotal in facilitating Jerin’s contributions to the DPDK community and the broader open source ecosystem.

Under Prasun’s guidance, Jerin was able to focus on integrating ARM64 support and specific hardware drivers into DPDK, transitioning from proprietary SDKs to fully open source contributions. This strategic shift has been supported by Marvell’s commitment to open-source innovation, a direction strongly advocated by Prasun at the time.

This strategic move allowed the team to build open-source accelerator drivers and contribute significantly to the DPDK community. This transition also marked the expansion of Jerin’s team, growing from a single person to around 50 contributors, significantly enhancing their collaborative efforts in high-performance networking. For instance, in Marvell’s current SKUs (Stock Keeping Units), Jerin achieved the capability of handling 105 million packets per second per core, a testament to his focus on performance optimization.

The acquisition of Cavium by Marvell, completed in July 2018, was a strategic move to create a leading semiconductor company focused on infrastructure solutions. This merger combined Marvell’s expertise in storage controllers, networking solutions, and high-performance wireless connectivity with Cavium’s strengths in network, video, and security processors.

The Journey into DPDK

Being one of the first contributors to the DPDK project was both challenging and rewarding for Jerin Jacob. The community played a significant role in this journey, constantly providing feedback and support. When Jerin began working on adding ARM64 support, his initial task was to eliminate dependencies on x86 in the build process. This effort involved adding a new layer for ARM64 support, benefiting greatly from the input and guidance of other maintainers at that time.

The early DPDK community was small, comprising a few dedicated individuals. Jerin recalls working alongside Bruce Richardson, Thomas Monjalon, Konstantin Ananyev, Stephen Hemminger, and Anatoly Burakov. While Anatoly primarily focused on the memory subsystem, Bruce and Stephen were deeply involved in various aspects of the project. Konstantin Ananyev, an x86 maintainer, was also instrumental in helping Jerin navigate the intricacies of integrating ARM64 support, providing valuable insights on maintaining cross-architecture compatibility.

Technical Contributions

Jerin started contributing to DPDK by tackling the significant challenge of removing x86 build dependencies and introducing ARM 64-bit (ARM64) support. This groundbreaking effort involved optimizing numerous libraries for ARM-specific instructions, making DPDK versatile and robust across different hardware platforms.

One of Jerin’s major achievements includes the development of the Event Device subsystem, which abstracts work scheduling aspects of hardware. This subsystem has been widely adopted by companies like Ericsson, Intel, NXP, and Marvell, demonstrating its broad applicability and impact.

Jerin also authored the Regular Expression (RegEx) and Machine Learning (ML) device classes, enabling advanced pattern matching and machine learning capabilities within DPDK. Furthermore, he developed the Graph Library, which enables graph-based packet processing, and the high-performance Trace Library, essential for performance monitoring and debugging. Both libraries have significantly enhanced DPDK’s capabilities.

In addition to his technical contributions, Jerin has been a respected maintainer in the DPDK community. He assists with maintaining various subsystems and sub-trees, collaborates with other major contributors, and represents Marvell on the technical board.

Work-Life Balance

Jerin Jacob places a high value on maintaining a balanced work-life dynamic, even while managing the demands of a high-paced career. One of his favorite ways to unwind and recharge is through travel. He typically plans solo trips once or twice a year, offering him the opportunity to explore new places and experiences. Additionally, he ensures to take at least one family trip annually, cherishing the moments spent with his loved ones and creating lasting memories.

Apart from traveling, Jerin is committed to lifelong learning. He dedicates time to expanding his knowledge in various fields, including Advanced Machine Learning (AML) and other emerging technologies. This continuous learning enhances his skill set and keeps him abreast of the latest advancements in his field.

Advice to New Developers Entering the Community

For new developers entering the community, Jerin suggests diving into the existing bug lists, starting with minor bugs or major ones depending on your comfort level. Fixing bugs, no matter the size, is an excellent way to familiarize yourself with the codebase and understand the project’s intricacies.

Another crucial area to focus on is improving the build system. This aspect is common to all contributors and offers a manageable way to get involved without feeling overwhelmed. Start with lightweight tasks that you can handle comfortably. This approach helps you gain confidence and learn the workflows and standards of the community.

Once you feel comfortable and have gained some recognition within the community, gradually move on to contributing to subsystems and higher-level aspects of the project. These areas require more time and more profound knowledge but offer significant learning opportunities and a chance to make a substantial impact.

The Importance of DPDK Summit Events and In-Person Interactions

Jerin’s experience with DPDK events has been instrumental in shaping his contributions to the community. He has attended almost all the European Summits, which serve as vital platforms for maintainers and tech leaders to communicate and collaborate. These summits provide an opportunity for in-person discussions, which are invaluable for exchanging ideas and resolving issues more effectively than through mailing lists alone. 

Reflecting on his first summit, Jerin recalls the excitement and the significant difference it made to meet people in person after years of communication through mailing lists. The initial years, around a decade ago, did not feature many summits, or he was not in a position to travel. The shift from purely online interactions to face-to-face meetings brought a new level of understanding and collaboration. The feedback and discussions that occurred in person were far more proactive and productive, helping to build a sense of camaraderie and mutual understanding.

Meeting his peers in person allowed Jerin to understand their perspectives and work more seamlessly with them. He highlights that once you know people personally, it becomes easier to align proposals and projects to meet their expectations and avoid potential conflicts. This personal interaction helps in anticipating how someone might react to a new idea, enabling a more strategic approach to collaboration.

DPDK Maintainers as a Band of Musicians

If DPDK maintainers were a band, each member would play a unique and vital role, similar to musicians in an orchestra or a rock band. In this analogy, Jerin sees himself as the lyricist. The lyricist’s role is crucial as it involves creating the first stage of the song, defining how it needs to be structured, and setting the tone and direction for the rest of the band. This is akin to Jerin’s contributions to DPDK, where he defines high-level designs and APIs, laying the groundwork for others to build upon.

As a lyricist, Jerin focuses on the initial conceptualization and strategic planning of the project. He provides the foundational elements and guidelines that others follow to ensure the project progresses smoothly and coherently. Just as a song starts with lyrics that give it meaning and direction, Jerin’s work ensures that the project’s core components are well-defined and robust.

While Jerin primarily identifies with the role of the lyricist, he acknowledges that maintainers can take on multiple roles within the DPDK “band.” However, due to his current responsibilities, he is more focused on the planning and strategic aspects rather than individual contributions. He likens this to writing the lyrics rather than performing on stage.

Technology Jerin Could Not Live Without

For Jerin, mobile devices and computers are indispensable. These technologies are not only integral to his daily life but also form the backbone of his professional work. Mobile devices, in particular, offer the flexibility and connectivity that keep him engaged and productive, no matter where he is. They enable seamless communication, instant access to information, and the ability to manage various tasks on the go.

Computers, on the other hand, are essential for more intensive computing tasks, development work, and large-scale projects. They provide the robust capabilities needed for coding, debugging, and running complex simulations. Jerin relies heavily on these tools to execute his work efficiently and effectively.

In addition to these fundamental technologies, Jerin is continuously learning and adapting to new advancements. He is particularly interested in Artificial Intelligence and Machine Learning (AI/ML). By exploring how AI/ML can be leveraged, Jerin aims to offload mundane and repetitive tasks, allowing people to focus on more critical and creative aspects of their work. This approach not only enhances productivity but also fosters innovation.

The Future of DPDK and the Impact of AI

Jerin envisions a future where many routine and repetitive tasks in software development are offloaded to artificial intelligence (AI). This includes writing unit test cases, ensuring proper git commits, checking the sanity of code, and refactoring. AI’s role will be to handle these mundane yet essential tasks, allowing developers to focus on innovative and complex aspects of development.

He believes that once the initial ideas and high-level design are defined, AI can significantly accelerate the development process. While AI can take over tasks that follow set patterns and rules, such as generating unit test cases or identifying code regressions, the core implementation and performance-critical coding will still require human expertise. This is because the nuanced understanding of performance optimization is something that AI cannot fully replicate yet.

By automating repetitive tasks, AI can reduce the workload for maintainers. For instance, when a new patch is submitted, AI can review the code for basic sanity checks, allowing human maintainers to concentrate on more complex reviews and implementation details. This synergy between AI and human developers can lead to more efficient and faster development cycles. Jerin sees AI playing a crucial role in integrating new technologies and developing new libraries.

The Convergence of AI and Emerging Technologies in DPDK’s Future

Jerin envisions a transformative future for the Data Plane Development Kit (DPDK) project, driven by the convergence of AI, IoT, decentralized infrastructure, cloud computing, 5G, and other emerging technologies. This integration will significantly influence the direction and development of DPDK.

Expanding Beyond Drivers

Initially, DPDK focused primarily on driver development. However, the project’s scope is now expanding to include protocol aspects. For instance, Marvell has recently upstreamed support for security protocols like PDCP and TLS. The DPDK graph library is setting the stage for further protocol support, which will extend the project’s capabilities beyond simple driver APIs to include hardware-accelerated protocol development.

Workload-Driven Accelerators

In the past, general-purpose CPUs were sufficient for most tasks. However, the landscape is shifting towards workload-driven accelerators. Today’s approach involves understanding specific workloads and optimizing for them, rather than developing generic solutions. This top-down methodology means that companies, particularly hyperscalers like Google, are designing chips tailored to their specific workloads. This shift from general-purpose to workload-optimized accelerators is a key trend that DPDK is aligning with.

Offloading to Accelerators

Jerin sees a future where various tasks are offloaded to specialized accelerators. Starting from packet processing, the evolution includes offloading complete security, AI, and machine learning workloads. This shift involves using data processing units (DPUs) or other specialized processors (XPUs) to handle intensive tasks, allowing the main host CPU to focus on orchestrating these processes rather than executing them.

The Role of AI in Development

AI will play a crucial role in automating many routine tasks within DPDK development, such as writing unit test cases, ensuring proper git commits, checking code sanity, and refactoring. By offloading these repetitive tasks to AI, developers can focus on more complex and innovative aspects of development. While AI will handle the repetitive and structured tasks, human developers will continue to drive the implementation of new ideas and performance-critical code.

Future Integration and Development

As AI and blockchain technologies become more integrated, DPDK will adapt to support these advancements. The project will continue to evolve, focusing on enabling seamless integration and efficient execution of these emerging technologies within the DPDK framework. This includes optimizing accelerators for specific workloads, ensuring that the host CPU delegates rather than performs intensive tasks.

Overall, Jerin Jacob’s vision for DPDK involves a continuous evolution towards supporting more complex protocols and workloads, driven by the convergence of AI and other emerging technologies. This will enable DPDK to remain at the forefront of technological innovation and performance optimization.

Reflections on Open Source and DPDK

Jerin has been a significant contributor to both the Linux kernel and the DPDK (Data Plane Development Kit) projects. Reflecting on his journey, he shares his thoughts on the impact of open-source contributions to his career and personal growth.

Jerin finds great enjoyment in contributing to open source projects. This joy stems not only from the act of coding, but also from engaging with the community. “You can learn a lot from the community and how other people think about a given technical problem in different ways,” he explains. The collaborative nature of open source fosters an environment where diverse perspectives come together, leading to innovative solutions and continuous learning.

Contributing to DPDK has played a crucial role in Jerin’s personal and career development. He acknowledges the platform as a stepping stone in his corporate journey. “It’s helped me in a lot of personal growth as well,” Jerin states, emphasizing how open source contributions have bolstered his professional advancement. By proving his skills and consistently contributing to high-impact projects, Jerin has been able to climb the corporate ladder effectively.

Jerin likens his experience with DPDK to building a staircase. Each contribution is a stone laid down, providing a foundation for the next. “You prove something, and it’s like one stepping stone. With that, we put another stone, and you can just build on that,” he says. This iterative process of contributing, learning, and growing has been a rewarding and empowering journey for Jerin.

In summary, Jerin’s reflections highlight the impact that contributing to open source projects like DPDK can have on an individual’s career and personal development. It is a testament to the value of open-source communities in fostering growth, collaboration, and innovation.

Learn more about how you can contribute here: https://www.dpdk.org/contribute/

DPDK Dispatch June

By Monthly Newsletter

1. Main Announcements

3. User Stories, Dev Spotlights

  • Submit a blog here
  • Submit a developer spotlight here

4. DPDK & Technologies in the news:

5. Performance Reports & Meeting Minutes

This newsletter is sent to thousands of DPDK developers and is a collaborative effort. If you have a project release, pull request, community event, or relevant article you would like to be considered as a highlight for next month, please reply to marketing@dpdk.org

Thank you for your continued support and enthusiasm.

DPDK Team.

Microsoft Azure Mana DPDK Q&A

By Blog

In today’s rapidly evolving digital landscape, the demand for high-speed, reliable, and scalable network solutions is greater than ever. Enterprises are constantly seeking ways to optimize their network performance to handle increasingly complex workloads. The integration of the Data Plane Development Kit (DPDK) with Microsoft Azure’s Network Adapter (MANA) is a groundbreaking development in this domain.

Building on our recent user story, “Unleashing Network Performance with Microsoft Azure MANA and DPDK,” this blog post delves deeper into how this integration is revolutionizing network performance for virtual machines on Azure. DPDK’s high-performance packet processing capabilities, combined with MANA’s advanced hardware offloading and acceleration features, enable users to achieve unprecedented levels of throughput and reliability.

In this technical Q&A, Brian Denton, Senior Program Manager at Microsoft Azure Core, further illuminates the technical intricacies of DPDK and MANA, including the specific optimizations implemented to ensure seamless compatibility and high performance. He also elaborates on the tools and processes provided by Microsoft to help developers leverage this powerful integration, simplifying the deployment of network functions virtualization (NFV) and other network-centric applications.

1. How does Microsoft’s MANA integrate with DPDK to enhance the packet processing capabilities of virtual machines on Azure, and what specific optimizations are implemented to ensure compatibility and high performance?

[Brian]: MANA is a critical part of our hardware offloading and acceleration effort. The end goal is to maximize workloads in hardware and minimize the host resources needed to service virtual machines. Network Virtual Appliance (NVA) partner products and large customers leverage DPDK to achieve the highest possible network performance in Azure. We are working closely with these partners and customers to ensure their products and services take advantage of DPDK on our new hardware platforms.

2. In what ways does the integration of DPDK with Microsoft’s Azure services improve the scalability and efficiency of network-intensive applications, and what are the measurable impacts on latency and throughput?

[Brian]: Network Virtual Appliances are choke points in customers' networks and are often chained together to protect, deliver, and scale applications. Every application in the network path adds processing and latency between the endpoints communicating. Therefore, NVA products are heavily focused on speeds and feeds and designed to be as close to wire-speed as possible. DPDK is the primary tool used by firewalls, WAF, routers, Application Delivery Controllers (ADC), and other networking applications to reduce the impact of their products on network latency. In a virtualized environment, this becomes even more critical.

3. What tools and processes has Microsoft provided for developers to leverage DPDK within the Azure ecosystem, and how does this integration simplify the deployment of network functions virtualization (NFV) and other network-centric applications? 

[Brian]: We provide documentation on running testpmd in Azure: https://aka.ms/manadpdk. Most NVA products are on older LTS Linux kernels and require backporting kernel drivers, so having a working starting point is crucial for integrating DPDK applications with new Azure hardware.

4. How does DPDK integrate with the MANA hardware and software, especially considering the need for stable forward-compatible device drivers in Windows and Linux?

[Brian]: The push for hardware acceleration in a virtualized environment comes with the drawback that I/O devices are exposed to the virtual machine guests through SR-IOV. Introducing the next generation of network cards often requires the adoption of new network drivers in the guest. For DPDK, this depends on the Linux kernel, which may not have drivers available for new hardware, especially in older long-term support versions of Linux distros. Our goal with the MANA driver is to have a common, long-lived driver interface that will be compatible with future networking hardware in Azure. This means that DPDK applications will be forward-compatible and long-lived in Azure.

5. What steps were taken to ensure DPDK’s compatibility with both Mellanox and MANA NICs in Azure environments?

[Brian]: We introduced SR-IOV through Accelerated Networking early 2018 with the Mellanox ConnectX-3 card. Since then, we’ve added ConnectX-4 Lx, ConnectX-5, and now the Microsoft Azure Network Adapter (MANA). All these network cards still exist in the Azure fleet, and we will continue to support DPDK products leveraging Azure hardware. The introduction of new hardware does not impact the functionality of prior generations of hardware, so it’s a matter of ensuring new hardware and drivers are supported and tested prior to release.

6. How does DPDK contribute to the optimization of TCP/IP performance and VM network throughput in Azure?

[Brian]: See answer to #2. DPDK is necessary to maximize network performance for applications in Azure, especially for latency sensitive applications and heavy network processing.

7. How does DPDK interact with different operating systems supported by Azure MANA, particularly with the requirement of updating kernels in Linux distros for RDMA/InfiniBand support?

[Brian]: DPDK applications require a combination of supported kernel and user space drivers including both Ethernet and RDMA/InfiniBand. Therefore, the underlying Linux kernel must include MANA drivers to support DPDK. The latest versions of Red Hat and Ubuntu support both the Ethernet and InfiniBand Linux kernel drivers required for DPDK.

8. Can you provide some examples or case studies of real-world deployments where DPDK has been used effectively with Azure MANA?

[Brian]: DPDK applications in Azure are primarily firewall, network security, routing, and ADC products provided by our third-party Network Virtual Appliance (NVA) partners through the Marketplace.  With our most recent Azure Boost preview running on MANA, we’ve seen additional interest by some of our large customers in integrating DPDK into their own proprietary services.

9. How do users typically manage the balance between using the hypervisor’s virtual switch and DPDK for network connectivity in scenarios where the operating system doesn’t support MANA?

[Brian]: In the case where the guest does not have the appropriate network drivers for the VF, the netvsc driver will automatically forward traffic to the software vmbus. The DPDK application developer needs to ensure that they support the netvsc PMD to make this work.
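For illustration, here is a minimal sketch of how a DPDK application might confirm at runtime which PMD is backing each port, so it knows whether traffic is flowing over the synthetic vmbus path or the accelerated VF. This sketch is not from the Azure documentation, and the driver-name strings ("net_netvsc" in particular) are assumptions for this example; check them against the NIC driver documentation for your DPDK release.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

/* Sketch: report whether each probed port is backed by the synthetic
 * netvsc PMD or by another (VF) driver. Driver names are assumptions. */
int main(int argc, char **argv)
{
    uint16_t port_id;

    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return EXIT_FAILURE;
    }

    RTE_ETH_FOREACH_DEV(port_id) {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            continue;

        if (strcmp(dev_info.driver_name, "net_netvsc") == 0)
            printf("port %u: synthetic vmbus path (netvsc PMD)\n",
                   (unsigned)port_id);
        else
            printf("port %u: driver %s (likely the VF)\n",
                   (unsigned)port_id, dev_info.driver_name);
    }

    rte_eal_cleanup();
    return 0;
}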

10. What future enhancements or features are being considered for DPDK in the context of Azure MANA, especially with ongoing updates and improvements in Azure’s cloud networking technology?

[Brian]: The supported feature list is published in the DPDK documentation: 1. Overview of Networking Drivers — Data Plane Development Kit 24.03.0-rc4 documentation (dpdk.org). We will release with the current set of features and get feedback from partners and customers on demand for any new features.

11. How does Microsoft plan to address the evolving needs of network performance and scalability in Azure with the continued development of DPDK and MANA?

[Brian]: We are focused on hardware acceleration to drive the future performance and scalability in Azure. DPDK is critical for the most demanding networking customers and we will continue to ensure that it’s supported on the next generations of hardware in Azure.

12. How does Microsoft support the community and provide documentation regarding the use of DPDK with Azure MANA, especially for new users or those transitioning from other systems?

[Brian]: Feature documentation is generated out of the codebase and results in the following:

Documentation for MANA DPDK, including running testpmd, can be found here: https://aka.ms/manadpdk

13. Are there specific resources or training modules that focus on the effective use of DPDK in Azure MANA environments?

[Brian]: We do not have specific training resources for customers to use DPDK in Azure, but that’s a good idea. Typically, DPDK is used by key partners and large customers that work directly with our development teams.

14. Will MANA provide functionality for starting and stopping queues?

[Brian]: TBD. What’s the use case and have you seen a need for this? Customers will be able to change the number of queues, but I will have to find out whether they can be stopped/started individually.

15. Is live configuration of Receive Side Scaling (RSS) possible with MANA?

[Brian]: Yes. RSS is supported by MANA.

16. Does MANA support jumbo frames?

[Brian]: Jumbo frames and MTU size tuning are available as of DPDK 24.03 and rdma-core v49.1.

17. Will Large Receive Offload (LRO) and TCP Segmentation Offload (TSO) be enabled with MANA?

[Brian]: LRO in hardware (also referred to as Receive Segment Coalescing) is not supported; the software implementation should work fine.

18. Are there specific flow offloads that MANA will implement? If so, which ones?

[Brian]: MANA does not initially support DPDK flows. We will evaluate the need as customers request it.

19. How is low migration downtime achieved with DPDK?

[Brian]: This is a matter of reducing the amount of downtime during servicing events and supporting hotplugging. Applications will need to implement the netvsc PMD to service traffic while the VF is revoked and fall back to the synthetic vmbus.

20. How will you ensure feature parity with mlx4/mlx5, which support a broader range of features?

[Brian]: Mellanox creates network cards for a broad customer base that includes all the major public cloud platforms as well as retail.  Microsoft does not sell the MANA NIC to retail customers and does not have to support features that are not relevant to Azure. One of the primary benefits of MANA is we can keep functionality specific to the needs of Azure and iterate quickly.

21. Is it possible to select which NIC is used in the VM (MANA or mlx), and for how long will mlx support be available?

[Brian]: No, you will never see both MANA and Mellanox NICs on the same VM instance. Additionally, when a VM is allocated (started) it will select a node from a pool of hardware configurations available for that VM size. Depending on the VM size, you could get allocated on ConnectX-3, ConnectX-4 Lx, ConnectX-5, or eventually MANA. VMs will need to support mlx4, mlx5, and mana drivers until the hardware is retired from the fleet to ensure they are compatible with Accelerated Networking.

22. Will there be support for Windows and FreeBSD with DPDK for MANA?

[Brian]: There are currently no plans to support DPDK on Windows or FreeBSD. However, there is interest within Microsoft to run DPDK on Windows.

23. What applications are running on the SoC?

[Brian]: The SoC is used for hardware offloading of host agents that were formerly run in software on the host and hypervisor. This ultimately frees up memory and CPU resources on the host that can be utilized for VMs, and it reduces the impact of noisy neighbors, jitter, and blackout times during servicing events.

24. What applications are running on the FPGA?

[Brian]: This is initially restricted to I/O hardware acceleration such as RDMA, the MANA NIC, as well as host-side security features.

Read the full user story ‘Unleashing Network Performance with Microsoft Azure MANA and DPDK’

Cache Awareness in DPDK Mempool

By Blog

Author: Kamalakshitha Aligeri – Senior Software Engineer at Arm

The objective of DPDK is to accelerate packet processing by transferring the packets from the NIC  to the application directly, bypassing the kernel. The performance of DPDK relies on various factors such as memory access latency, I/O throughput, CPU performance, etc.

Efficient packet processing relies on ensuring that packets are readily accessible in the hardware  cache. Additionally, since the memory access latency of the cache is small, the packet processing  performance increases if more packets can fit into the hardware cache. Therefore, it is important  to know how the packet buffers are allocated in hardware cache and how it can be utilized to get  the maximum performance. 

With the default buffer size in DPDK, the hardware cache is utilized to its full capacity, but it is not clear whether this is intentional. This blog therefore explains how the buffer size impacts performance and what to keep in mind when changing the default buffer size in DPDK in the future.

In this blog, I will describe:

1. Problem with contiguous buffers 

2. Allocation of buffers with cache awareness 

3. Cache awareness in DPDK mempool 

4. l3fwd performance results with and without cache awareness 

Problem with contiguous buffers 

The mempool in DPDK is created from a large chunk of contiguous memory. The packets from the network are stored in packet buffers of fixed size (objects in the mempool). The problem with contiguous buffers arises when the CPU accesses only a portion of each buffer, as in DPDK’s L3 forwarding application, where only the metadata and packet headers are accessed. The rest of the buffer is not brought into the cache, which results in inefficient cache utilization. To gain a better understanding of this problem, it’s essential to understand how the buffers are allocated in the hardware cache.

How are buffers mapped in Hardware Cache? 

Consider a 1KB, 4-way set-associative cache with a 64-byte cache line size. The total number of cache lines would be 1KB/64B = 16. For a 4-way cache, each set will have 4 cache lines. Therefore, there will be a total of 16/4 = 4 sets.

As shown in Figure1, each memory address is divided into three parts: tag, set and offset. 

• The offset bits specify the position of a byte within a cache line (since each cache line is 64 bytes, 6 bits are needed to select a byte within a line).

• The set bits determine which set the cache line belongs to (2 bits are needed to identify one of the 4 sets).

• The tag bits uniquely identify the memory block. Once the set is identified with the set bits, the tag bits of the 4 ways in that set are compared against the tag bits of the memory address to check if the address is already present in the cache.

Figure 1 Memory Address 
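To make the split concrete, here is a small stand-alone C sketch (using the example 1KB, 4-way, 64-byte-line cache from above; it is not DPDK code) that extracts the tag, set, and offset bits from an address:

#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u   /* 6 offset bits */
#define NUM_SETS  4u    /* 2 set bits for the example cache */

static void decompose(uintptr_t addr)
{
    uintptr_t offset = addr % LINE_SIZE;               /* byte within the line */
    uintptr_t set    = (addr / LINE_SIZE) % NUM_SETS;  /* which set it maps to */
    uintptr_t tag    = addr / (LINE_SIZE * NUM_SETS);  /* identifies the block */

    printf("addr 0x%03lx -> tag %lu, set %lu, offset %lu\n",
           (unsigned long)addr, (unsigned long)tag,
           (unsigned long)set, (unsigned long)offset);
}

int main(void)
{
    decompose(0x00);   /* first half of buffer 1 -> set 0 */
    decompose(0x40);   /* second half of buffer 1 -> set 1 */
    decompose(0x400);  /* buffer 9 without padding -> back to set 0 */
    return 0;
}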

In Figure 2, each square represents a cache line of 64 bytes. Each row represents a set. Since it’s a  4-way cache, each set contains 4 cache lines in it – C0 to C3. 

Figure 2 Hardware Cache 

Let’s consider a memory area that can be used to create a pool of buffers. Each buffer is 128 bytes and hence occupies 2 cache lines. Assuming the first buffer address starts at 0x0, the addresses of the buffers are as shown below.

Figure 3 Contiguous buffers in memory

In the above figure, the offset bits are highlighted in orange, the set bits in green, and the tag bits in blue. Consider buffer 1’s address, where the set bits “00” mean the buffer maps to set 0. Assuming all the sets are initially empty, buffer 1 occupies the first cache line of 2 contiguous sets.

Since buffer 1 address is 0x0 and the cache line size is 64 bytes, the first 64 bytes of the buffer  occupy the cache line in set0. For the next 64 bytes, the address becomes 0x40 (0b01000000) indicating set1 because the set bits are “01”. As a result, the last 64 bytes of the buffer occupy the  cache line in set1. Thus, the buffer is mapped into cache lines (S0, C0) and (S1, C0). 

Figure 4 Hardware cache with buffer 1 

Similarly, buffer 2 will occupy the first cache line of next two sets (S2, C0) and (S3, C0).

Figure 5 Hardware cache with 2 buffers 

The set bits in buffer 3’s address, “00”, show that buffer 3 maps to set 0 again. Since the first cache line of set 0 and set 1 is occupied, buffer 3 occupies the second cache line of sets 0 and 1: (S0, C1) and (S1, C1).

Figure 6 Hardware cache with 3 buffers 

Similarly, buffer 4 occupies the second cache line of sets 2 and 3, and so on. Each buffer is represented with a different color, and a total of 8 buffers can occupy the hardware cache without any evictions.

Figure 7 Allocation of buffers in hardware cache 

Although the buffer size is 128 bytes, the CPU might not access all the bytes. For example, for 64-byte packets, only the first 64 bytes of the buffer are consumed by the CPU (i.e., one cache line’s worth of data).

Since the buffers are two cache lines long and contiguous, and only the first 64 bytes of each buffer is accessed, only sets 0 and 2 are populated with data; sets 1 and 3 go unused (unused sets are shown with a pattern in Figure 8).

Figure 8 Unused sets in hardware cache 

When buffer 9 needs to be cached, it maps to set 0 since its set bits are “00”. Assuming an LRU replacement policy, the least recently used cache line among the 4 ways in set 0 (buffer 1, 3, 5, or 7) will be evicted to accommodate buffer 9, even though set 1 and set 3 are empty.

This is highly inefficient, as we are not utilizing the full capacity of the cache.

Solution – Allocation of buffers with Cache awareness 

In the above example, if the unused cache sets can be utilized to allocate the subsequent buffers (buffers 9–16), we would utilize the cache in a more efficient manner.

To accomplish this, the memory addresses of the buffers can be manipulated during the creation of the mempool. This can be achieved by inserting one cache line of padding after every 8 buffers, effectively aligning the buffer addresses in a way that utilizes the cache more efficiently. Let’s take the above example of contiguous buffer addresses and compare it with the same buffers but with cache line padding.

Figure 9 Without cache line padding

Figure 10 With cache line padding

From Figures 9 and 10, we can see that buffer 9’s address has changed from 0x400 to 0x440. With the 0x440 address, buffer 9 maps to set 1. So there is no need to evict any cache line from set 0, and we are utilizing the previously unused set 1.

Similarly, buffer 10 maps to set 3 instead of set 2, and so on. This way, buffers 9 to 16 can occupy sets 1 and 3, which are unused by buffers 1 to 8.

Figure 11 Hardware cache with cache awareness 

This approach effectively distributes the allocation of buffers to better utilize the hardware cache. Since for 64-byte packets, only the first cache line of each buffer contains useful data, we are  effectively utilizing the hardware cache capacity by accommodating useful packet data from 16  buffers instead of 8. This doubles the cache utilization, enhancing the overall performance of the  system. 
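The address shift produced by the padding is easy to reproduce. The sketch below (illustrative only, using the same 1KB example cache and 128-byte buffers as above, not the actual mempool code) prints the set that the first cache line of each buffer lands in, with one cache line of padding inserted after every 8 buffers:

#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u
#define NUM_SETS  4u
#define BUF_SIZE  128u   /* example buffer size from the article */
#define PAD_EVERY 8u     /* one cache line of padding per 8 buffers */

int main(void)
{
    uintptr_t addr = 0;

    for (unsigned int i = 1; i <= 16; i++) {
        uintptr_t set = (addr / LINE_SIZE) % NUM_SETS;

        printf("buffer %2u: addr 0x%03lx -> first line maps to set %lu\n",
               i, (unsigned long)addr, (unsigned long)set);

        addr += BUF_SIZE;
        if (i % PAD_EVERY == 0)
            addr += LINE_SIZE;   /* padding shifts subsequent buffers by one set */
    }
    return 0;
}

With the padding in place, buffer 9 starts at 0x440 instead of 0x400 and therefore maps to set 1, matching the figures above.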

Padding of cache lines is necessary primarily when the cache size is exactly divisible by the buffer size (which is the case when the buffer size is a power of 2). When the buffer size does not divide evenly into the cache size, the leftover portion of each buffer effectively introduces an offset like the one achieved through padding.

Cache Awareness in DPDK Mempool 

In DPDK mempool, each buffer typically has a size of 2368 bytes and consists of several distinct  fields – header, object and trailer. Let’s look at each one of them.

Figure 13 Mempool buffer fields 

Header: This portion of the buffer contains metadata and control information needed by DPDK to manage the buffer efficiently. It includes information such as the buffer length and buffer state or type, and it helps to iterate over mempool objects. The size of the object header is 64 bytes.

Object: This section contains the actual payload or data. Within the object section, there are additional fields such as the mbuf, headroom, and packet data. The mbuf of 128 bytes contains metadata such as the message type, the offset to the start of the packet data, and a pointer to additional mbuf structures. Then there is a headroom of 128 bytes. The packet data area is 2048 bytes and contains the packet headers and payload. Together, the 64-byte header, 128-byte mbuf, 128-byte headroom, and 2048 bytes of packet data add up to the 2368-byte buffer size.

Trailer: The object trailer is 0 bytes, but an 8-byte cookie is added in debug mode. This cookie acts as a marker to detect corruption.

With a buffer size of 2368 bytes (not a power of 2), the buffers are inherently aligned with cache  awareness without the need for cache line padding. In other words, the buffer size is such that it  optimizes cache utilization without the need for additional padding. 

The buffer size of 2368 bytes does not include the padding added to distribute buffers across  memory channels. 

To show how performance can vary with a buffer size that is a power of 2, I ran an experiment with a 2048-byte buffer size and compared it against the default buffer size of the mempool in DPDK. In the experiment, 8192 buffers are allocated in the mempool and a histogram of cache sets for all the buffers is plotted. The histogram illustrates the number of buffers allocated in each cache set.

Figure 14 Histogram of buffers – 2048 bytes 

With a buffer size of 2048 bytes, the same sets in the hardware cache are hit repeatedly, whereas other sets are not utilized (we can see that from the gaps in the histogram).

Figure 15 Histogram of buffers – 2368 bytes

With a buffer size of 2368 bytes, each set is being accessed only around 400 times. There are no  gaps in the above histogram, indicating that the cache is being utilized efficiently. 
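A simplified version of this experiment can be reproduced without DPDK by walking an array of buffer start addresses and counting which cache set the first line of each buffer maps to. The sketch below assumes a 64KB, 4-way, 64-byte-line cache (256 sets) and buffers starting at address 0; because it counts only the first line of each buffer, the absolute counts differ from the figures above, but it shows the same pattern of gaps for 2048-byte buffers versus an even spread for 2368-byte buffers.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 64u
#define NUM_SETS  256u   /* 64KB / 64-byte lines / 4 ways */
#define NUM_BUFS  8192u

/* Count how many buffers have their first cache line in each set. */
static void set_histogram(size_t buf_size, unsigned int hist[NUM_SETS])
{
    uintptr_t addr = 0;

    memset(hist, 0, NUM_SETS * sizeof(hist[0]));
    for (unsigned int i = 0; i < NUM_BUFS; i++) {
        hist[(addr / LINE_SIZE) % NUM_SETS]++;
        addr += buf_size;
    }
}

int main(void)
{
    unsigned int hist[NUM_SETS];

    set_histogram(2048, hist);   /* power of 2: only every 32nd set is hit */
    printf("2048 B buffers: set 0 hit %u times, set 1 hit %u times\n",
           hist[0], hist[1]);

    set_histogram(2368, hist);   /* not a power of 2: hits spread over all sets */
    printf("2368 B buffers: set 0 hit %u times, set 1 hit %u times\n",
           hist[0], hist[1]);
    return 0;
}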

DPDK l3fwd Performance 

The improved cache utilization observed in the histogram, attributed to cache awareness, is further corroborated by the throughput numbers of the l3fwd application. The application is run on a system with a 64KB 4-way set-associative cache.

The chart below shows the throughput in Mpps (million packets per second) for a single-core l3fwd test with 2048-byte and 2368-byte buffer sizes.

Figure 16 l3fwd throughput comparison

There is a 17% performance increase with the 2368-byte buffer size.

Conclusion 

Contiguous buffer allocation in memory with cache awareness enhances performance by minimizing cache evictions and maximizing hardware cache utilization. In scenarios where the cache size is exactly divisible by the buffer size (e.g., a 2048-byte buffer), padding cache lines creates an offset in the memory addresses and a better distribution of buffers in the cache. This led to a 17% increase in performance for the DPDK l3fwd application.

However, with a buffer size that does not divide evenly into the cache size, as is the default in DPDK, the offset in the buffer addresses already provides the effect of cache line padding, resulting in improved performance.

For more information, visit the programmer’s guide.

Tracing Ciara Power’s Path: A Leap from Mathematics to DPDK Expertise at Intel

By Community Spotlight

Welcome to the latest installment of our DPDK Developer Spotlight series, where we share the unique journeys and insights of those who contribute to the DPDK community. This edition highlights Ciara Power, a former Technical Lead and Network Software Engineer at Intel. We explore her path into open source development from a math enthusiast at school to a software developer shaping the future of DPDK.

Early Life and Education

A Mathematical Foundation

Ciara’s pathway into the world of computer science and programming was not straightforward. Initially grounded in mathematics, her educational journey began in an environment where technical subjects were rarely emphasized: an all-girls school in Ireland that did not prioritize technology. Despite this, Ciara’s inherent love for math led her to pursue it at the university level.

Discovering Programming

While pursuing her studies at the University of Limerick, Ciara encountered a pivotal moment—a chance to explore programming through an introductory taster course subject. This opportunity resonated with a piece of advice she had received from her mother since childhood: she was destined to be a programmer. 

Transitioning to Computer Science 

A Turning Point

This insight from her mother proved to be more than mere encouragement; it was a recognition of Ciara’s innate abilities and potential for finding joy and fulfillment in a realm she had yet to explore. Indeed, it was a powerful testament to the foresight and intuition that mothers often have about their children’s hidden talents; as they say, ‘Mother knows best’.

After finishing the programming subject course, Ciara reached a turning point. The practical aspects of problem solving appealed to her more than theoretical mathematics. Driven by this preference, and after several challenging weeks, she decided to exit the mathematics course. That September, she took a notable step by starting a computer science course at the Waterford Institute of Technology.

The first year of her computer science studies confirmed her decision; she thrived in this environment, where she could apply logical thinking to tangible problems. The satisfaction of crafting solutions and the joy of creative exploration grounded her. 

Balancing Hobbies and Career

A Blend of Technical and Artistic Talents

Ciara’s enthusiasm for her studies crossed over into other areas of her life, enriching her creative pursuits. From painting and drawing to woodworking and knitting, she embraced a wide array of hobbies, each providing a different outlet for her creative expression. This blend of technical skill and artistic talent became a defining feature of her approach to both work and leisure. 

Ciara’s engagement with her various hobbies provides a crucial balance and unique perspective that enhances her programming work: the ability to visualize the broader picture before delving into details. Just as a painter steps back to view the whole canvas, Ciara applies a similar approach in her coding practices. This allows her to assess a project from various angles. 

Her method of drawing diagrams on a whiteboard is emblematic of her systematic approach to problem-solving, juxtaposed with her ability to incubate ideas and contemplate them from different perspectives. 

This blend of logic and creativity marks her programming style, making her adept at tackling complex problems with innovative solutions. Her ability to think outside the box and not get overly absorbed in minutiae gives her an edge, making her work both methodical and inspired.

Moreover, these pursuits offer Ciara a form of catharsis, a way to decompress and process information subconsciously, which in turn feeds into her professional work. 

Her dual approach—systematic yet open to creative leaps—illustrates how her hobbies not only complement but actively enhance her capabilities as a programmer. This synergy between her personal interests and professional skills exemplifies how diverse experiences can contribute to professional excellence in technology and programming.

Professional Development at Intel

Internship and Real-World Experience

Ciara’s transition from academia to the practical, fast-paced world of software development provided her with an invaluable perspective that she would carry throughout her career. Her internship with the DPDK team at Intel in Shannon, Ireland, was not just about gaining professional experience; it was a deep dive into the collaborative and iterative processes of real-world technology development.

Challenges and Adaptation

During her eight-month placement, Ciara engaged directly with complex projects that were far more advanced than her college assignments. This experience was crucial for her; it wasn’t just about coding but also about understanding how large-scale software development projects function, how teams interact, and how products evolve from a concept to a market-ready entity.

One significant challenge was her initial foray into the open source community through DPDK. Coming from an academic background where open source wasn’t a focus, the learning curve was steep. 

She had to quickly adapt to the open source ethos of sharing, collaborative open development, and the transparent critique of code. Learning to navigate and contribute to discussions on mailing lists, where she interacted with developers of varying seniority from around the world, was initially daunting.

As a newcomer, she was initially anxious about how she might be received, given the prevalent challenges women often face in tech environments. However, her experience was overwhelmingly positive. From the onset, she was treated with the same respect and consideration as any seasoned developer. This egalitarian approach was not only affirming but also empowering.

To integrate herself into the DPDK community, Ciara adopted a humble approach to learning and contributing. She began by actively listening and understanding the community dynamics before making her contributions.

Reviewing others’ code and providing constructive feedback became a routine that not only helped her understand the nuances of professional coding but also built her reputation as a thoughtful and capable developer. This proactive engagement helped her transition from an intern at Intel to a respected member of the community.

Projects and Technical Accomplishments

Ciara’s technical journey with DPDK deepened significantly, largely due to the interactions and guidance from OG maintainers Bruce Richardson (Network Software Engineer at Intel Corporation) and Akhil Goyal (Principal Engineer at Marvell Semiconductor). 

Her first major project was contributing to the development of the Telemetry Library V1, a library for retrieving information and statistics about various other DPDK libraries through socket client connections. This not only honed her technical skills but also gave her a solid understanding of handling community feedback for large patchsets, with plenty of discussion around how to implement the library.

In terms of her main contributions, Ciara refactored the unit test framework, adding support for nested testsuites. This included reworking the cryptodev autotests to make use of nested testsuites and ensure all testcases are counted individually in test summaries. This, in turn, improved the testing experience for the user, making it easier to see which testcases are passing/failing [0].

She was also involved in various improvements for the Intel IPsec-mb SW PMDs, including combining PMDs to use common shared code [1], adding multiprocess support [2], and adding Scatter-Gather List support [3] [3.1].

Ciara also worked on removing the Make build system from DPDK. Meson had been introduced a few releases prior, so it was time to completely remove the old build system, with help from many others. A huge task, it touched on nearly every document, library, and driver, and it involved significant collaboration in the community, with plenty of reviews and testing by other developers and maintainers. [3]

She added an API and command-line argument to set the maximum SIMD bitwidth for EAL. Previously, a number of components in DPDK had optional AVX-512 or other vector paths that could be selected at runtime by each component using its own decision mechanism. This work added a single setting to control which code paths are used. It can be used to enable some non-default code paths, e.g. ones using AVX-512, but also to limit the code paths to certain vector widths, or to scalar code only, which is useful for testing. [4]
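As a rough illustration of how a component consumes that setting, the snippet below gates an AVX-512 path behind the EAL maximum SIMD bitwidth. The function and constant names are based on the rte_vect API and should be verified against the documentation for the DPDK release you use.

#include <rte_vect.h>

/* Illustrative sketch: only take the AVX-512 code path when the EAL
 * maximum SIMD bitwidth setting allows it. */
static inline int use_avx512_path(void)
{
    return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512;
}

For testing, the same mechanism can force scalar code by lowering the limit on the EAL command line, for example with a low value such as --force-max-simd-bitwidth=64 (see the EAL parameters documentation for the accepted values).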

Additionally, Ciara improved the cryptodev library’s asymmetric session usage by hiding the structure in an internal header and using a single mempool rather than pointers to private data elsewhere [4]. She also enabled numerous QAT devices and algorithms, including, most recently, new GEN3 and GEN5 devices [5].

Bug Fixing

Ciara’s proactive engagement led her to work on fixing various bugs. By utilizing bug detection tools like AddressSanitizer and Coverity, she debugged and resolved a wide range of issues. This process was not just about resolving immediate problems; it also helped her build a deeper understanding of better programming practices that could be applied in future feature development.

By contributing significant patches and actively participating in community discussions, Ciara received encouragement instead of the skepticism or condescension often found in other communities. This supportive atmosphere helped her quickly find her footing and gain confidence in her abilities. Her contributions were evaluated solely on their merit, reflecting the DPDK community’s commitment to contributor diversity.

Community Engagement and Recognition

Active participation and support 

Throughout her journey, the open source community, particularly her interactions on the DPDK forums and mailing lists, played a crucial role. Under the guidance of Bruce Richardson, Pablo de Lara Guarch and Akhil Goyal, Ciara not only contributed significantly but also gained insights that helped shape her technical and strategic acumen. 

This exposure allowed her to understand diverse perspectives and collaborative methods essential for open development and open governance across technical communities.

Major Accomplishments

Reflecting on her significant milestones with DPDK, Ciara highlights two major accomplishments. During her internship at Intel, she contributed to the development of the Telemetry Library V1, a library for retrieving information and statistics about various other DPDK libraries through socket client connections. 

Upon returning as a graduate, she was entrusted with the complete rewrite of this library, leading to the development of Telemetry V2. This task demonstrated her progression as a developer, showcasing her ability to significantly improve and build upon her earlier work within a relatively short span of time. 

Her involvement in developing this library was a significant learning journey, filled with complex challenges and intensive problem-solving that required her to engage deeply with the technology and the DPDK community. 

The Telemetry library project stood out not only for its technical demands but also for the collaborative effort it required. Ciara navigated through numerous technical discussions, debates, and feedback loops, integrating community insights to implement and enhance the robustness of the code. 

Another notable highlight was her handling of large patch sets. These weren’t monumental in features but were substantial in scope and impact, involving critical enhancements and fixes that improved DPDK’s functionality and reliability.

Valued advice and the Importance of Code Reviews

One of the most impactful pieces of advice Ciara received from the DPDK community centered on the importance of code reviews. Embracing this practice not only honed her technical skills but also cultivated a mindset geared towards continuous improvement and collaboration. 

This advice underscored the necessity of meticulously reviewing her own code as well as that of others, which facilitated a deeper understanding of various coding approaches and strategies.

Ciara learned that taking a step back to scrutinize every detail of her work from a broader design perspective was crucial. This approach allowed her to explore alternative solutions and methodologies that might not be immediately apparent. 

Engaging in thorough reviews helped her identify potential issues before they escalated, enhancing the overall quality and reliability of her contributions.

Personal Achievement and Awards

Ciara has been recognized multiple times for her contributions at Intel, underscoring her influence and impact within the tech giant. One of her notable accolades includes the Intel Women’s Achievement Award 2021, a testament to her substantial and measurable impact on Intel’s business, profitability, and reputation. 

This award is particularly significant as it celebrates individuals who not only excel in their roles but also drive meaningful change across the organization.

In addition to this, Ciara has received multiple Intel Recognition Awards. These commendations highlight her exceptional development work and her proactive approach to risk management, which has helped prevent bottlenecks in community projects. 

Her efforts around major patch sets during this period were instrumental in her winning these awards. They were not just routine contributions but were pivotal in enhancing Intel’s technological frameworks. 

DPDK Events and the Importance of In-Person Collaboration

Ciara’s experiences at DPDK events provide an illustration of her integration and active participation in the community. After completing her internship at Intel, Ciara attended the DPDK Summit as a participant, not as a speaker. 

This event was particularly significant as it occurred shortly after she returned to college in September, marking her first engagement with the community outside of a professional capacity.

During the summit, Ciara experienced the surreal yet affirming moment of connecting faces to the names of those she had interacted with only via the mailing list: individuals who had reviewed her work and those whose code she had studied.

The recognition she received from other community members, often unexpectedly knowing who she was, played a crucial role in her sense of belonging and validation within the technical community. This recognition, while surprising to her, underscored the impact of her contributions and her growing reputation within the community.

Life Beyond Work 

Balancing life with Nature and Adventure

Ciara’s life outside her technical career is focused on enhancing her well-being and providing a counterbalance to her intensive work in tech. 

A dedicated hiker, she has participated in significant events like a charity hike for Cystic Fibrosis Ireland with colleague Pablo de Lara Guarch, in which a group of hikers scaled Mt. Kilimanjaro in Tanzania (5,895 meters) to watch Siobhan Brady set a new world record by performing on her Celtic harp at the summit!

This particular hike, dubbed the “highest harp concert,” is one of the life highlights she fondly recalls. You can watch the incredible performance here.

Ciara finds a unique kind of solace close to nature, living just minutes from the coast in the south of Ireland. Her daily walks on the beach and, in the summer, swims in the ocean are more than just routine; they are a fundamental aspect of her life, crucial for her mental and physical well-being.

These moments by the sea allow her to unwind, reflect, and regain balance, proving essential for maintaining her productivity and creativity in her professional life.

As she prepares to transition from Intel, with plans to move to Sydney, Australia, Ciara looks forward to exploring new professional landscapes and personal adventures. This move not only signifies a change in her career but also underscores her willingness to embrace new experiences and challenges, whether in tech or in her personal pursuits. 

The future holds unknowns, but Ciara approaches it with enthusiasm and excitement about the possibilities that lie ahead in both her professional and personal life.

To learn more about the benefits of contributing to DPDK, read on here.

DPDK Dispatch May

By Monthly Newsletter

1. Main Announcements

3. Blogs, User Stories and Developer Spotlights

4. DPDK & Technologies in the news:

5. Performance Reports & Meeting Minutes

This newsletter is sent out to thousands of DPDK developers, it’s a collaborative effort. If you have a project release, pull request, community event, and/or relevant article you would like to be considered as a highlight for next month, please reply to marketing@dpdk.org

Thank you for your continued support and enthusiasm.

DPDK Team.

DPDK Long Term Stable (LTS) Release 22.11.05

By Blog

The latest DPDK Long Term Stable (LTS) Release 22.11.05 includes several updates and enhancements across various components of the DPDK framework. Significant changes in this release involve numerous file modifications, which are indicated by a large number of insertions and deletions across the codebase. The release notes document extensive additions, suggesting improvements and new features in the areas of network interface controllers (NICs), cryptographic devices, event devices, and baseband processing.

Notable adjustments were made to the build system, documentation, and driver support for various hardware. Improvements in error handling, memory management, and device operation stability are also reflected in the release notes. The release also addresses various bug fixes and performance enhancements to ensure better stability and efficiency.

Contributors

The update involved 246 files with a total of 3,235 insertions and 2,053 deletions. A big shout-out to all the contributors to this release, including:

Ajit Khaparde, Akhil Goyal, Akshay Dorwat, Alan Elder, Alex Vesker, Ali Alnubani, Andrew Boyer, Anoob Joseph, Arkadiusz Kusztal, Bing Zhao, Bruce Richardson, Chaoyong He, Chengwen Feng, Ciara Power, Dariusz Sosnowski, David Marchand, Dengdui Huang, Edwin Brossette, Eli Britstein, Emi Aoki, Erez Shitrit, Ferruh Yigit, Fidel Castro, Flore Norceide, Ganapati Kundapura, Gregory Etelson, Hamdan Igbaria, Hanumanth Pothula, Hao Chen, Harman Kalra, Hernan Vargas, Holly Nichols, Huisong Li, Jie Hai, Jonathan Erb, Joyce Kong, Kaiwen Deng, Kalesh AP, Kevin Traynor, Kiran Kumar K, Kishore Padmanabha, Kommula Shiva Shankar, Konstantin Ananyev, Kumara Parameshwaran, Long Li, Luca Boccassi, Maayan Kashani, Masoumeh Farhadi Nia, Maxime Coquelin, Michael Baum, Mingjin Ye, Morten Brørup, Mário Kuka, Neel Patel, Nithin Dabilpuram, Pavan Nikhilesh, Pengfei Sun, Qi Zhang, Qian Hao, Radu Nicolau, Rahul Bhansali, Rakesh Kudurumalla, Robin Jarry, Rongwei Liu, Satheesh Paul, Shai Brandes, Shaowei Sun, Shihong Wang, Shun Hao, Simei Su, Sivaprasad Tummala, Sivaramakrishnan Venkat, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori, Sunyang Wu, Tom Jones, Viacheslav Ovsiienko, Wathsala Vithanage, Weiguo Li, Yajun Wu, and Yunjian Wang.

These contributors addressed various aspects from software fixes and performance enhancements to security improvements across multiple components of the system.

Download it here: DPDK 22.11.5

The git tree for this version can be accessed here: DPDK Stable 22.11