[dpdk-dev,v3,1/2] ethdev: introduce Rx queue offloads API

Message ID dcf25792cdb0da6e9a0b412f97786151fe571b88.1505284270.git.shahafs@mellanox.com (mailing list archive)
State Superseded, archived
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Shahaf Shuler Sept. 13, 2017, 6:37 a.m. UTC
  Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports a capability set for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload must be set in both the device
configuration and the queue configuration. A per-queue offload can be set
in the queue configuration alone.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)
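
For illustration only (not part of the patch), a minimal application-side
sketch of the flow the commit message describes; setup_port() is a
hypothetical helper, and the queue/descriptor counts are placeholders:

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
setup_port(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_rxconf rxq_conf;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Opt in to the new API: ignore the legacy rxmode bitfield. */
	conf.rxmode.ignore_offload_bitfield = 1;

	/* Request a per-port offload in the device configuration... */
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
		conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* ...and repeat it in the queue configuration, as the commit
	 * message requires for per-port offloads. */
	rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = conf.rxmode.offloads;

	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &rxq_conf, mb_pool);
}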
  

Comments

Andrew Rybchenko Sept. 13, 2017, 8:13 a.m. UTC | #1
On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
>
> Applications should set the ignore_offload_bitfield bit on rxmode
> structure in order to move to the new API.

I think it would be useful to have this description in the documentation.
How per-port and per-queue offloads coexist is a really important topic,
and the rules should be 100% clear for PMD maintainers and application
developers.

Please also highlight how per-port and per-queue capabilities should be
advertised, i.e. whether a per-queue capability should be reported as
per-port as well. I'd say no, to avoid duplicating per-queue capabilities
in two places. If so, could you explain why, to enable an offload, it
should be specified in both places? And how should a configuration be
treated when the offload is enabled at the port level but disabled at the
queue level?

> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/nics/features.rst  |  33 ++++----
>   lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
>   lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>   3 files changed, 210 insertions(+), 30 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 37ffbc68c..4e68144ef 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -179,7 +179,7 @@ Jumbo frame
>   
>   Supports Rx jumbo frames.
>   
> -* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,

Maybe it should be removed from the documentation when it is removed from
the sources? I have no strong opinion, but it would be clearer to find it
in the documentation with its status specified (obsolete).
The note applies to all similar cases below.

[snip]

> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 0adf3274a..ba7a2b2dc 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h

[snip]

> @@ -907,6 +934,18 @@ struct rte_eth_conf {
>   #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
>   #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
>   #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
> +#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
> +#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
> +#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
> +#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
> +#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
> +#define DEV_RX_OFFLOAD_SCATTER		0x00002000
> +#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> +				 DEV_RX_OFFLOAD_UDP_CKSUM | \
> +				 DEV_RX_OFFLOAD_TCP_CKSUM)
> +#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
> +			     DEV_RX_OFFLOAD_VLAN_FILTER | \
> +			     DEV_RX_OFFLOAD_VLAN_EXTEND)

It is not directly related to the patch, but I'd like to highlight that
Rx/Tx are asymmetric here, since SCTP is missing for Rx but present for Tx.

[snip]
  
Andrew Rybchenko Sept. 13, 2017, 8:49 a.m. UTC | #2
On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
>
> Applications should set the ignore_offload_bitfield bit on rxmode
> structure in order to move to the new API.
>
> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/nics/features.rst  |  33 ++++----
>   lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
>   lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>   3 files changed, 210 insertions(+), 30 deletions(-)

[snip]

> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0597641ee..b3c10701e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
>   	}
>   }
>   
> +/**
> + * A conversion function from rxmode bitfield API.
> + */
> +static void
> +rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
> +				    uint64_t *rx_offloads)
> +{
> +	uint64_t offloads = 0;
> +
> +	if (rxmode->header_split == 1)
> +		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> +	if (rxmode->hw_ip_checksum == 1)
> +		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> +	if (rxmode->hw_vlan_filter == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> +	if (rxmode->hw_vlan_strip == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> +	if (rxmode->hw_vlan_extend == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> +	if (rxmode->jumbo_frame == 1)
> +		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (rxmode->hw_strip_crc == 1)
> +		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> +	if (rxmode->enable_scatter == 1)
> +		offloads |= DEV_RX_OFFLOAD_SCATTER;
> +	if (rxmode->enable_lro == 1)
> +		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
> +
> +	*rx_offloads = offloads;
> +}
> +
> +/**
> + * A conversion function from rxmode offloads API.
> + */
> +static void
> +rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
> +			    struct rte_eth_rxmode *rxmode)
> +{
> +
> +	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
> +		rxmode->header_split = 1;
> +	else
> +		rxmode->header_split = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
> +		rxmode->hw_ip_checksum = 1;
> +	else
> +		rxmode->hw_ip_checksum = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
> +		rxmode->hw_vlan_filter = 1;
> +	else
> +		rxmode->hw_vlan_filter = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> +		rxmode->hw_vlan_strip = 1;
> +	else
> +		rxmode->hw_vlan_strip = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
> +		rxmode->hw_vlan_extend = 1;
> +	else
> +		rxmode->hw_vlan_extend = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> +		rxmode->jumbo_frame = 1;
> +	else
> +		rxmode->jumbo_frame = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
> +		rxmode->hw_strip_crc = 1;
> +	else
> +		rxmode->hw_strip_crc = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
> +		rxmode->enable_scatter = 1;
> +	else
> +		rxmode->enable_scatter = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
> +		rxmode->enable_lro = 1;
> +	else
> +		rxmode->enable_lro = 0;
> +}
> +
>   int
>   rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>   		      const struct rte_eth_conf *dev_conf)
>   {
>   	struct rte_eth_dev *dev;
>   	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_conf local_conf = *dev_conf;
>   	int diag;
>   
>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>   		return -EBUSY;
>   	}
>   
> +	/*
> +	 * Convert between the offloads API to enable PMDs to support
> +	 * only one of them.
> +	 */
> +	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
> +		rte_eth_convert_rx_offload_bitfield(
> +				&dev_conf->rxmode, &local_conf.rxmode.offloads);
> +	} else {
> +		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
> +					    &local_conf.rxmode);

Ignore flag is lost here and it will result in treating txq_flags as the
primary information about offloads. It is important in the case of the
failsafe PMD.

> +	}
> +
>   	/* Copy the dev_conf parameter into the dev structure */
> -	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
> +	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
>   
>   	/*
>   	 * Check that the numbers of RX and TX queues are not greater

[snip]
  
Andrew Rybchenko Sept. 13, 2017, 9:13 a.m. UTC | #3
On 09/13/2017 11:49 AM, Andrew Rybchenko wrote:
> On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
>> Introduce a new API to configure Rx offloads.
>>
>> In the new API, offloads are divided into per-port and per-queue
>> offloads. The PMD reports capability for each of them.
>> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
>> To enable per-port offload, the offload should be set on both device
>> configuration and queue configuration. To enable per-queue offload, the
>> offloads can be set only on queue configuration.
>>
>> Applications should set the ignore_offload_bitfield bit on rxmode
>> structure in order to move to the new API.
>>
>> The old Rx offloads API is kept for the meanwhile, in order to enable a
>> smooth transition for PMDs and application to the new API.
>>
>> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>> ---
>>   doc/guides/nics/features.rst  |  33 ++++----
>>   lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
>>   lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>>   3 files changed, 210 insertions(+), 30 deletions(-) 

[snip]

>> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
>> index 0597641ee..b3c10701e 100644

[snip]

>> @@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>>           return -EBUSY;
>>       }
>>
>> +    /*
>> +     * Convert between the offloads API to enable PMDs to support
>> +     * only one of them.
>> +     */
>> +    if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
>> +        rte_eth_convert_rx_offload_bitfield(
>> +                &dev_conf->rxmode, &local_conf.rxmode.offloads);
>> +    } else {
>> +        rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
>> +                        &local_conf.rxmode);
>
> Ignore flag is lost here and it will result in treating txq_flags as the
> primary information about offloads. It is important in the case of the
> failsafe PMD.

Sorry, I mean rxmode (not txq_flags).

[snip]
  
Shahaf Shuler Sept. 13, 2017, 12:33 p.m. UTC | #4
Wednesday, September 13, 2017 12:13 PM, Andrew Rybchenko:
>> 		return -EBUSY;
>> 	}
>>
>> +	/*
>> +	 * Convert between the offloads API to enable PMDs to support
>> +	 * only one of them.
>> +	 */
>> +	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
>> +		rte_eth_convert_rx_offload_bitfield(
>> +				&dev_conf->rxmode, &local_conf.rxmode.offloads);
>> +	} else {
>> +		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
>> +					    &local_conf.rxmode);
>
> Ignore flag is lost here and it will result in treating txq_flags as the
> primary information about offloads. It is important in the case of the
> failsafe PMD.
>
> Sorry, I mean rxmode (not txq_flags).

I am not sure the ignore_offload_bitfield is lost on conversion. The convert
function does not assign to it.
  
Andrew Rybchenko Sept. 13, 2017, 12:34 p.m. UTC | #5
On 09/13/2017 03:33 PM, Shahaf Shuler wrote:
> Wednesday, September 13, 2017 12:13 PM, Andrew Rybchenko:
>>> 		return -EBUSY;
>>> 	}
>>>
>>> +	/*
>>> +	 * Convert between the offloads API to enable PMDs to support
>>> +	 * only one of them.
>>> +	 */
>>> +	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
>>> +		rte_eth_convert_rx_offload_bitfield(
>>> +				&dev_conf->rxmode, &local_conf.rxmode.offloads);
>>> +	} else {
>>> +		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
>>> +					    &local_conf.rxmode);
>>
>> Ignore flag is lost here and it will result in treating txq_flags as the
>> primary information about offloads. It is important in the case of the
>> failsafe PMD.
>>
>> Sorry, I mean rxmode (not txq_flags).
>
> I am not sure the ignore_offload_bitfield is lost on conversion. The
> convert function does not assign to it.

That's true. My bad.
  
Shahaf Shuler Sept. 13, 2017, 12:49 p.m. UTC | #6
Wednesday, September 13, 2017 11:13 AM, Andrew Rybchenko:
> On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
>
> I think it would be useful to have this description in the documentation.
> How per-port and per-queue offloads coexist is a really important topic,
> and the rules should be 100% clear for PMD maintainers and application
> developers.

OK.

> Please also highlight how per-port and per-queue capabilities should be
> advertised, i.e. whether a per-queue capability should be reported as
> per-port as well. I'd say no, to avoid duplicating per-queue capabilities
> in two places.

I will add documentation. Each offload can be reported in only one
capability field: either per-port or per-queue.
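
For PMD maintainers, a minimal sketch of that reporting rule, assuming a
hypothetical driver whose Rx scatter support is per-queue while VLAN filter
and CRC strip are per-port (the concrete split is invented here for
illustration, not taken from the patch):

#include <rte_common.h>
#include <rte_ethdev.h>

/* Each offload flag appears in exactly one capability field. */
static void
example_pmd_infos_get(struct rte_eth_dev *dev,
		      struct rte_eth_dev_info *info)
{
	RTE_SET_USED(dev);
	/* Per-queue capabilities: may be enabled on a subset of queues. */
	info->rx_queue_offload_capa = DEV_RX_OFFLOAD_SCATTER;
	/* Per-port capabilities: apply to all queues at once. */
	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_FILTER |
				DEV_RX_OFFLOAD_CRC_STRIP;
}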


> If so, could you explain why, to enable an offload, it should be
> specified in both places?

It is set in the queue setup as well to emphasize that the queue also has
this offload. Logically it can be avoided; however, I thought it is good to
have, to make it explicit to applications and PMDs.

> How should a configuration be treated when the offload is enabled at the
> port level but disabled at the queue level?

In that case the queue setup should return an error, since the application
is trying a mixed configuration for a per-port offload.
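
A possible sketch of that check on the PMD side (the helper name and its
placement are hypothetical, not taken from this patch):

#include <errno.h>
#include <rte_ethdev.h>

/*
 * Reject a mixed configuration: every per-port offload enabled in the
 * device configuration must also be present in the queue configuration.
 */
static int
example_check_rx_queue_offloads(struct rte_eth_dev *dev,
				const struct rte_eth_rxconf *rx_conf,
				uint64_t port_offload_capa)
{
	uint64_t port_offloads = dev->data->dev_conf.rxmode.offloads;
	uint64_t missing = (port_offloads & port_offload_capa) &
			   ~rx_conf->offloads;

	return missing == 0 ? 0 : -EINVAL;
}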



>> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
>> index 37ffbc68c..4e68144ef 100644
>> --- a/doc/guides/nics/features.rst
>> +++ b/doc/guides/nics/features.rst
>> @@ -179,7 +179,7 @@ Jumbo frame
>>
>>  Supports Rx jumbo frames.
>>
>> -* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
>
> Maybe it should be removed from the documentation when it is removed from
> the sources?
> I have no strong opinion, but it would be clearer to find it in the
> documentation with its status specified (obsolete).

I think that would complicate the documentation. The old API is obsolete.
If a PMD developer thinks about how to implement a new feature and reads
this doc, the feature should be implemented according to the new API.


[snip]

>> @@ -907,6 +934,18 @@ struct rte_eth_conf {
>>  #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
>>  #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
>>  #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
>> +#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
>> +#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
>> +#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
>> +#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
>> +#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
>> +#define DEV_RX_OFFLOAD_SCATTER		0x00002000
>> +#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
>> +				 DEV_RX_OFFLOAD_UDP_CKSUM | \
>> +				 DEV_RX_OFFLOAD_TCP_CKSUM)
>> +#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
>> +			     DEV_RX_OFFLOAD_VLAN_FILTER | \
>> +			     DEV_RX_OFFLOAD_VLAN_EXTEND)
>
> It is not directly related to the patch, but I'd like to highlight that
> Rx/Tx are asymmetric here, since SCTP is missing for Rx but present for Tx.

Right. This can be added in a different series.
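
For concreteness, such a follow-up could claim the next free bit; the value
below is purely hypothetical and not part of this patch:

#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00004000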




[snip]
  

Patch

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@  Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@  Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@  LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
@@ -363,7 +363,7 @@  VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@  CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,11 +509,10 @@  VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@  QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@  L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@  L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@  MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@  Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0597641ee..b3c10701e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -687,12 +687,90 @@  rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -722,8 +800,20 @@  rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the offloads API to enable PMDs to support
+	 * only one of them.
+	 */
+	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -767,7 +857,7 @@  rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1004,6 +1094,7 @@  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1074,8 +1165,18 @@  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -1979,7 +2080,8 @@  rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2055,23 +2157,41 @@  rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2080,6 +2200,13 @@  rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supports it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2094,13 +2221,16 @@  rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0adf3274a..ba7a2b2dc 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@  struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set in the rx_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * The bitfield API below is obsolete. Applications should
+	 * enable per-port offloads using the offloads field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@  struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set the offload bitfield should be ignored.
+		 * Instead per-port Rx offloads should be set on offloads
+		 * field above.
+		 * Per-queue offloads should be set in the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads
+		 * API is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@  struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set in the rx_queue_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@  struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@  struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1870,6 +1912,9 @@  uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_bitfield_offloads bit on *rxmode*
+ *        structure and use offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1923,6 +1968,8 @@  void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offload features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.