[PATCH v8 1/3] ethdev: introduce protocol hdr based buffer split

Andrew Rybchenko andrew.rybchenko at oktetlabs.ru
Sat Jun 4 16:25:54 CEST 2022


On 6/3/22 19:30, Ding, Xuan wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko at oktetlabs.ru>
>> Sent: Thursday, June 2, 2022 9:21 PM
>> To: Wu, WenxuanX <wenxuanx.wu at intel.com>; thomas at monjalon.net; Li,
>> Xiaoyun <xiaoyun.li at intel.com>; ferruh.yigit at xilinx.com; Singh, Aman Deep
>> <aman.deep.singh at intel.com>; dev at dpdk.org; Zhang, Yuying
>> <yuying.zhang at intel.com>; Zhang, Qi Z <qi.z.zhang at intel.com>;
>> jerinjacobk at gmail.com
>> Cc: stephen at networkplumber.org; Ding, Xuan <xuan.ding at intel.com>;
>> Wang, YuanX <yuanx.wang at intel.com>; Ray Kinsella <mdr at ashroe.eu>
>> Subject: Re: [PATCH v8 1/3] ethdev: introduce protocol hdr based buffer split
>>
>> Is it the right one since it is listed in patchwork?
> 
> Yes, it is.
> 
>>
>> On 6/1/22 16:50, wenxuanx.wu at intel.com wrote:
>>> From: Wenxuan Wu <wenxuanx.wu at intel.com>
>>>
>>> Currently, Rx buffer split supports length based split. With Rx queue
>>> offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet
>> segment
>>> configured, PMD will be able to split the received packets into
>>> multiple segments.
>>>
>>> However, length based buffer split is not suitable for NICs that do
>>> split based on protocol headers. Given a arbitrarily variable length
>>> in Rx packet
>>
>> a -> an
> 
> Thanks for your catch, will fix it in next version.
> 
>>
>>> segment, it is almost impossible to pass a fixed protocol header to PMD.
>>> Besides, the existence of tunneling means the composition of a
>>> packet varies, which makes the situation even worse.
>>>
>>> This patch extends current buffer split to support protocol header
>>> based buffer split. A new proto_hdr field is introduced in the
>>> reserved field of rte_eth_rxseg_split structure to specify protocol
>>> header. The proto_hdr field defines the split position of a packet;
>>> splitting always happens after the protocol header defined in the
>>> Rx packet segment. When Rx queue offload
>>> RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and corresponding
>> protocol
>>> header is configured, PMD will split the ingress packets into multiple
>> segments.
>>>
>>> struct rte_eth_rxseg_split {
>>>
>>>           struct rte_mempool *mp; /* memory pools to allocate segment from
>> */
>>>           uint16_t length; /* segment maximal data length,
>>>                               configures "split point" */
>>>           uint16_t offset; /* data offset from beginning
>>>                               of mbuf data buffer */
>>>           uint32_t proto_hdr; /* inner/outer L2/L3/L4 protocol header,
>>> 			       configures "split point" */
>>>       };
>>>
>>> Both inner and outer L2/L3/L4 level protocol header split can be supported.
>>> Corresponding protocol header capability is RTE_PTYPE_L2_ETHER,
>>> RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV6, RTE_PTYPE_L4_TCP,
>>> RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_INNER_L2_ETHER,
>>> RTE_PTYPE_INNER_L3_IPV4, RTE_PTYPE_INNER_L3_IPV6,
>>> RTE_PTYPE_INNER_L4_TCP, RTE_PTYPE_INNER_L4_UDP,
>> RTE_PTYPE_INNER_L4_SCTP.
>>>
>>> For example, let's suppose we configured the Rx queue with the
>>> following segments:
>>>       seg0 - pool0, proto_hdr0=RTE_PTYPE_L3_IPV4, off0=2B
>>>       seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
>>>       seg2 - pool2, off2=0B
>>>
>>> A packet consisting of MAC_IPV4_UDP_PAYLOAD will be split as
>>> follows:
>>>       seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from
>> pool0
>>>       seg1 - udp header @ 128 in mbuf from pool1
>>>       seg2 - payload @ 0 in mbuf from pool2
>>
>> It must be defined how ICMPv4 packets will be split in such case.
>> And how UDP over IPv6 will be split.
> 
> The ICMP header type is missing; I will define the expected split behavior and
> add it in the next version, thanks for your catch.
> 
> In fact, the buffer split based on protocol header depends on the driver parsing result.
> As long as the driver can recognize this packet type, I think there is no difference between
> UDP over IPv4 and UDP over IPv6?

We can bind it to ptypes recognized by the HW+driver, but I can
easily imagine a case where HW has no means to report the recognized
packet type (i.e. ptype get returns an empty list), but could still
split on it.
Also, nobody guarantees that there is no difference between UDP over
IPv4 and UDP over IPv6 recognition and split. IPv6 could have a number
of extension headers which may not be trivial to hop over in HW. So,
HW could recognize IPv6, but not the protocols after it.
Also, it is a very interesting question how to define protocol split
for IPv6 plus extension headers. Where to stop?

> 
>>>
>>> Now buffer split can be configured in two modes. For length based
>>> buffer split, the mp, length and offset fields in the Rx packet segment
>>> should be configured, while the proto_hdr field should not be configured.
>>> For protocol header based buffer split, the mp, offset and proto_hdr
>>> fields in the Rx packet segment should be configured, while the length
>>> field should not be configured.
>>>
>>> The split limitations imposed by the underlying PMD are reported in
>>> the rte_eth_dev_info->rx_seg_capa field. The memory attributes for
>>> the split parts may differ as well, e.g. DPDK memory vs. external
>>> memory.
>>>
>>> Signed-off-by: Xuan Ding <xuan.ding at intel.com>
>>> Signed-off-by: Yuan Wang <yuanx.wang at intel.com>
>>> Signed-off-by: Wenxuan Wu <wenxuanx.wu at intel.com>
>>> Reviewed-by: Qi Zhang <qi.z.zhang at intel.com>
>>> Acked-by: Ray Kinsella <mdr at ashroe.eu>
>>> ---
>>>    lib/ethdev/rte_ethdev.c | 40 +++++++++++++++++++++++++++++++++-------
>>>    lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
>>>    2 files changed, 60 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
>>> 29a3d80466..fbd55cdd9d 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -1661,6 +1661,7 @@ rte_eth_rx_queue_check_split(const struct
>> rte_eth_rxseg_split *rx_seg,
>>>    		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
>>>    		uint32_t length = rx_seg[seg_idx].length;
>>>    		uint32_t offset = rx_seg[seg_idx].offset;
>>> +		uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr;
>>>
>>>    		if (mpl == NULL) {
>>>    			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");
>> @@ -1694,13
>>> +1695,38 @@ rte_eth_rx_queue_check_split(const struct
>> rte_eth_rxseg_split *rx_seg,
>>>    		}
>>>    		offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM;
>>>    		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
>>> -		length = length != 0 ? length : *mbp_buf_size;
>>> -		if (*mbp_buf_size < length + offset) {
>>> -			RTE_ETHDEV_LOG(ERR,
>>> -				       "%s mbuf_data_room_size %u < %u
>> (segment length=%u + segment offset=%u)\n",
>>> -				       mpl->name, *mbp_buf_size,
>>> -				       length + offset, length, offset);
>>> -			return -EINVAL;
>>> +		if (proto_hdr == RTE_PTYPE_UNKNOWN) {
>>> +			/* Split at fixed length. */
>>> +			length = length != 0 ? length : *mbp_buf_size;
>>> +			if (*mbp_buf_size < length + offset) {
>>> +				RTE_ETHDEV_LOG(ERR,
>>> +					"%s mbuf_data_room_size %u < %u
>> (segment length=%u + segment offset=%u)\n",
>>> +					mpl->name, *mbp_buf_size,
>>> +					length + offset, length, offset);
>>> +				return -EINVAL;
>>> +			}
>>> +		} else {
>>> +			/* Split after specified protocol header. */
>>> +			if (!(proto_hdr &
>> RTE_BUFFER_SPLIT_PROTO_HDR_MASK)) {
>>
>> The condition looks suspicious. It will be true if proto_hdr has none of the
>> bits from the mask. I guess that is not the intent.
> 
> Actually it is the intent... Here the mask is used to check if proto_hdr
> belongs to the inner/outer L2/L3/L4 capabilities we defined. Which
> proto_hdr values are supported by the NIC will be checked in the PMD later.

Frankly speaking, I see no value in such an incomplete check if
we still rely on the driver. I simply see no reason to oblige the
driver to support at least one of these protocols.

> 
>> I guess the condition should be
>>     proto_hdr & ~RTE_BUFFER_SPLIT_PROTO_HDR_MASK i.e. there are
>> unsupported bits in proto_hdr
>>
>> IMHO we need extra field in dev_info to report supported protocols to split
>> on. Or a new API to get an array similar to ptype get.
>> May be a new API is a better choice to not overload dev_info and to be more
>> flexible in reporting.
> 
> Thanks for your suggestion.
> Here I hope to confirm the intent of the dev_info or API to expose the supported proto_hdr of the driver.
> Is it for the proto_hdr check in rte_eth_rx_queue_check_split()?
> If so, could we just check whether the configured proto_hdrs belong to L2/L3/L4 in the lib, and check the
> capability in the PMD? This is what the current design does.

Look. An application needs to know what to expect from an eth device.
It should know which protocols it can split on. Of course we can
enforce a try-fail approach on the application, which would make sense
if we had a dedicated API to request Rx buffer split, but since it
is done via Rx queue configuration, it could be tricky for the
application to realize which part of the configuration is wrong. It
could simply result in too many retries with different configurations.

I.e. the information should be used by ethdev to validate the request,
and the information should be used by the application to understand
what is supported.

> 
> Actually I have another question: do we need an API or dev_info to expose which buffer split the driver supports,
> i.e. length based or proto_hdr based? Because it requires different fields to be configured
> in the Rx packet segment.

See above. If a dedicated API returns -ENOTSUP or an empty set of
supported protocols to split on, the answer is clear.

> 
> Hope to get your insights. :)
> 
>>
>>> +				RTE_ETHDEV_LOG(ERR,
>>> +					"Protocol header %u not
>> supported)\n",
>>> +					proto_hdr);
>>
>> I think it would be useful to log unsupported bits only, if we say so.
> 
> The same as above.
> Thanks again for your time.
> 
> Regards,
> Xuan
