[dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API

Shahaf Shuler shahafs at mellanox.com
Wed Sep 6 08:01:43 CEST 2017


Tuesday, September 5, 2017 6:31 PM, Ananyev, Konstantin:
> >
> > > > > > > In fact, right now it is possible to query/change these 3
> > > > > > > vlan offload flags on the fly (after dev_start) on a port
> > > > > > > basis by the rte_eth_dev_(get|set)_vlan_offload API.
> >
> > Regarding this API from ethdev.
> >
> > So this seems like a hack on ethdev. Currently there are 2 ways for the
> > user to set Rx vlan offloads.
> > One is through dev_configure, which requires the ports to be stopped. The
> > other is this API, which can set them even when the port is started.
> 
> Yes, there is an ability to enable/disable VLAN offloads without
> stopping/reconfiguring the device.
> Though I wouldn't call it 'a hack'.
> From my perspective - it is a useful feature.
> Same as it is possible in some cases to change the MTU without stopping the
> device, etc.
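
For reference, a minimal sketch of the runtime path being discussed, using the existing rte_eth_dev_get_vlan_offload()/rte_eth_dev_set_vlan_offload() calls (the helper name and error handling here are only illustrative):

```c
#include <rte_ethdev.h>

/* Toggle VLAN stripping on an already-started port, keeping the other
 * VLAN offload bits (filter/extend) as they currently are. */
static int
enable_vlan_strip_at_runtime(uint8_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask; /* e.g. -ENODEV for an invalid port */

	mask |= ETH_VLAN_STRIP_OFFLOAD;

	/* No stop/reconfigure needed - the PMD applies the change on the
	 * fly, provided it implements the (optional) vlan_offload_set op. */
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}
```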
> 
> >
> > We should have only one place where the application sets offloads, and this
> > is currently on dev_configure,
> 
> Hmm, if HW supports the ability to do things at runtime, why do we have to
> stop users from using that ability?
> 
> > And in the future it will be on rx_queue_setup.
> >
> > I would say that this API should be removed as well.
> > An application which wants to change those offloads will stop the ports and
> > reconfigure the PMD.
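
A minimal sketch of that stop/reconfigure flow, using the existing rxmode bit-fields; queue re-setup and the application's own bookkeeping are omitted for brevity:

```c
#include <rte_ethdev.h>

/* Stop the port, change the Rx offload configuration, rebuild the queues
 * and start again. nb_rxq/nb_txq and conf are assumed to be the values the
 * application already used for its initial configuration. */
static int
change_vlan_strip_with_restart(uint8_t port_id, struct rte_eth_conf *conf,
			       uint16_t nb_rxq, uint16_t nb_txq)
{
	int ret;

	rte_eth_dev_stop(port_id);

	conf->rxmode.hw_vlan_strip = 1;	/* existing per-port bit-field */

	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
	if (ret < 0)
		return ret;

	/* ... re-run rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() here ... */

	return rte_eth_dev_start(port_id);
}
```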
> 
> I wouldn't agree - see above.
> 
> > I am quite sure that there are PMDs which need to re-create the Rxq
> > based on vlan offloads changing, and this cannot be done while the traffic
> > flows.
> 
> That's an optional API - the PMD can choose whether it wants to support it or not.
> 
> >
> >
> > > > > > > So, I think at least these 3 flags need to remain on a port
> > > > > > > basis.
> > > > > >
> > > > > > I don't understand how it helps to be able to configure the
> > > > > > same thing in 2 places.
> > > > >
> > > > > Because some offloads are per device, others - per queue.
> > > > > Configuring on a device basis would allow most users to configure
> > > > > all queues in the same manner by default.
> > > > > Those users who would need a more fine-grained setup (per queue)
> > > > > will be able to overwrite it by rx_queue_setup().
> > > >
> > > > Those users can set the same config for all queues.
> > > > >
> > > > > > I think you are just describing a limitation of this HW: some
> > > > > > offloads must be the same for all queues.
> > > > >
> > > > > As I said above - on some devices some offloads might also
> > > > > affect queues that belong to VFs (to other ports in DPDK words).
> > > > > You might never invoke rx_queue_setup() for these queues in your app.
> > > > > But you still want to enable this offload on that device.
> > >
> > > I am ok with having per-port and per-queue offload configuration.
> > > My concern is that after that patch only per-queue offload
> > > configuration will remain.
> > > I think we need both.
> >
> > So it looks like we all agree that PMDs should report, as part of
> > rte_eth_dev_info_get, which offloads are per port and which are per queue.
> 
> Yep.
> 
> >
> > Regarding the offloads configuration by the application I see 2 options:
> > 1. have an API to set offloads per port as part of device configure and an
> >    API to set offloads per queue as part of queue setup.
> > 2. set all offloads as part of queue configuration (per-port offloads will
> >    be set equally for all queues). In case of a mixed configuration for
> >    port offloads the PMD will return an error.
> >    Such an error can be reported on device start. The PMD will traverse
> >    the queues and check for conflicts.
> >
> > I will focus on the cons, since both achieve the goal:
> >
> > Cons of #1:
> > - Two places to configure offloads.
> 
> Yes, but why is that a problem?

If we could make the offloads API set the offloads in a single place it would be much cleaner and less error prone.
There would be a single flow which changes the offloads configuration.
Later on, when we want to change/extend it, it will be much simpler, as all modifications can happen in a single place only.
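
To illustrate, a rough sketch of how such a single-place configuration (option #2 above) could look from the application side, assuming rte_eth_rxconf gains the offloads bit-field of DEV_RX_OFFLOAD_* flags proposed in this series (the flag set and helper below are only an example):

```c
#include <string.h>
#include <rte_ethdev.h>

/* Option #2: every offload, including the "per-port" ones, is passed via
 * rx_queue_setup(); the application simply repeats the same flags for each
 * queue. A PMD whose offloads are port-wide only would reject conflicting
 * queue configurations (or flag them on dev_start). */
static int
setup_all_rxqs(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_desc,
	       struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf;
	uint16_t q;
	int ret;

	memset(&rxconf, 0, sizeof(rxconf));
	/* Proposed per-queue offload bit-field from this series. */
	rxconf.offloads = DEV_RX_OFFLOAD_VLAN_STRIP |
			  DEV_RX_OFFLOAD_VLAN_FILTER;

	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, nb_desc,
					     rte_eth_dev_socket_id(port_id),
					     &rxconf, mp);
		if (ret < 0)
			return ret;
	}
	return 0;
}
```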

> 
> > - Like Thomas mentioned - what about offloads per device? This direction
> >   leads to more places to configure the offloads.
> 
> As you said above - there would be 2 places: per port and per queue.
> Could you explain - what other places are you talking about?

In fact, the vlan filter offload for the PF is a *per device* offload and not per port, since the corresponding VF gets it just by the fact that the PF set it on dev_configure.
So to be exact, such an offload should be set in a new offload section called "per device offloads".
Currently you compromise by setting it in the *per port* offload section, with a proper explanation of the VF limitation on Intel.
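
Just to illustrate the reporting side - the field names below are hypothetical, not existing rte_eth_dev_info members - the PMD could advertise separately which offload bits are device/port-wide and which may differ per queue, so the application knows up front where each flag belongs:

```c
#include <stdint.h>

/* Hypothetical capability split, for illustration only. */
struct offload_capa_report {
	uint64_t rx_offload_capa;       /* all Rx offloads the PMD supports */
	uint64_t rx_queue_offload_capa; /* subset that may differ per queue */
};

/* Offloads supported by the device but not configurable per queue would
 * have to live in the per-port (or "per device") section only. */
static uint64_t
port_only_rx_offloads(const struct offload_capa_report *capa)
{
	return capa->rx_offload_capa & ~capa->rx_queue_offload_capa;
}
```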

> 
> >
> > Cons of #2:
> > - Late error reporting - on device start and not on queue setup.
> 
> Consider a scenario where the PF has corresponding VFs (the PF is controlled
> by DPDK). Right now (at least with Intel HW) it is possible to:
> 
> struct rte_eth_conf dev_conf;
> dev_conf.rxmode.hw_vlan_filter = 1;
> ...
> rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
> rte_eth_dev_start(pf_port_id);
> 
> In that scenario I don't have any RX/TX queues configured.
> Though I am still able to enable the vlan filter, and it would work correctly for the VFs.
> Same for other per-port offloads.

For the PF - enabling vlan filtering without any queues means nothing. The PF can receive no traffic, so what difference does it make whether vlan filtering is set?
For the VF - I assume it will have queues, therefore vlan filtering has a meaning for it. However, as I said above, the VF has the vlan filter because on Intel this is a per-device offload, so this is not a good example.

Which other per-port offloads do you refer to?
I don't understand the meaning of setting per-port offloads without opening any Tx/Rx queues.


> With approach #2 it simply wouldn't work.

Yes, for vlan filtering it will not work on Intel, and this may be enough to move to suggestion #1.
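
For completeness, Konstantin's PF scenario expressed with approach #1, assuming the per-port offloads bit-field (rxmode.offloads) and the DEV_RX_OFFLOAD_VLAN_FILTER flag this series proposes: the vlan filter stays a port/device-level setting, so it can be enabled even with zero Rx/Tx queues on the PF and the VFs still inherit it.

```c
#include <string.h>
#include <rte_ethdev.h>

static int
enable_pf_vlan_filter(uint8_t pf_port_id)
{
	struct rte_eth_conf dev_conf;
	int ret;

	memset(&dev_conf, 0, sizeof(dev_conf));
	/* Proposed per-port offload bit-field from this series. */
	dev_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_FILTER;

	/* No Rx/Tx queues at all - still valid with approach #1. */
	ret = rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(pf_port_id);
}
```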

Thomas?

> 
> So my preference is still #1.
> 
> Konstantin
> 
> >
> > I would go with #2.
> >
> > > Konstantin
> > >
> > > >
> > > > You are advocating for a per-port configuration API because some
> > > > settings must be the same on all the ports of your hardware?
> > > > So there is a bigger problem: you don't need per-port settings, but
> > > > per-hw-device settings.
> > > > Or would you accept more fine-grained per-port settings?
> > > > If yes, you can accept even finer grained per-queue settings.
> > > > >
> > > > > > It does not prevent from configuring them in the per-queue setup.
> > > > > >
> > > > > > > In fact, why can't we have both per port and per queue RX offload:
> > > > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them
> > > > > > >   on a port basis.
> > > > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply
> > > > > > >   them on a queue basis.
> > > > > > > - if a particular RX_OFFLOAD flag for that device couldn't be set
> > > > > > >   up on a queue basis - rx_queue_setup() will return an error.
> > > > > >
> > > > > > The queue setup can work while the value is the same for all
> > > > > > queues.
> > > > >
> > > > > Ok, and how would people know that?
> > > > > That for device N offload X has to be the same for all queues,
> > > > > and for device M offload X can differ between queues.
> > > >
> > > > We can know the hardware limitations by filling this information
> > > > at PMD init.
> > > >
> > > > > Again, if we don't allow enabling/disabling offloads for a
> > > > > particular queue, why bother with updating the rx_queue_setup() API
> > > > > at all?
> > > >
> > > > I do not understand this question.
> > > >
> > > > > > > - rte_eth_rxq_info can be extended to provide information on
> > > > > > >   which RX_OFFLOADs can be configured on a per queue basis.
> > > > > >
> > > > > > Yes, the PMD should advertise its limitations, like being forced
> > > > > > to apply the same configuration to all its queues.
> > > > >
> > > > > Didn't get your last sentence.
> > > >
> > > > I agree that the hardware limitations must be written in an ethdev
> > > > structure.

