[dpdk-dev] [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues

Nipun Gupta nipun.gupta at nxp.com
Mon Jul 31 05:57:08 CEST 2017



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> Sent: Saturday, July 29, 2017 20:43
> To: Rao, Nikhil <nikhil.rao at intel.com>
> Cc: gage.eads at intel.com; dev at dpdk.org; thomas at monjalon.net;
> bruce.richardson at intel.com; harry.van.haaren at intel.com; Hemant Agrawal
> <hemant.agrawal at nxp.com>; Nipun Gupta <nipun.gupta at nxp.com>;
> narender.vangati at intel.com; Abhinandan Gujjar
> <abhinandan.gujjar at intel.com>
> Subject: Re: [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
> 
> -----Original Message-----
> > Date: Thu, 27 Jul 2017 16:28:29 +0530
> > From: "Rao, Nikhil" <nikhil.rao at intel.com>
> > To: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > CC: gage.eads at intel.com, dev at dpdk.org, thomas at monjalon.net,
> >  bruce.richardson at intel.com, harry.van.haaren at intel.com,
> >  hemant.agrawal at nxp.com, nipun.gupta at nxp.com,
> >  narender.vangati at intel.com,
> >  Abhinandan Gujjar <abhinandan.gujjar at intel.com>, nikhil.rao at intel.com
> > Subject: Re: [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
> >
> >
> >
> > In the case of a SW thread we would like to use the servicing weight
> > specified in the queue to do WRR across <ports, queues[]>, in keeping with
> 
> OK, then let's work together to address this in a transparent manner that
> works for both HW and SW.
> 
> > the adapter per <eventdev, eth port> model; one way to do this is to use the
> > same cfg.service_name in the rte_event_eth_rx_adapter_configure() call.
> >
> > However this creates a few difficulties/inconsistencies:
> 
> I agree. If we are thinking about WRR across <ports, queues[]> then the
> above proposal creates inconsistencies. On the other hand, it creates
> challenges for a HW implementation to have a unified adapter API that works
> for both HW and SW.
> 
> >
> > 1) A service has the notion of a socket ID. Multiple event dev IDs can be
> > included in the same service, and each event dev has its own socket ID ->
> > this seems to be an inconsistency that shouldn't be allowed by design.
> >
> > 2) Say the Rx event adapter doesn't drop packets (this could be
> > configurable), i.e., if events cannot be enqueued into the event device,
> > they remain in a buffer; when the buffer fills up, packets aren't dequeued
> > from the eth device.
> >
> > In the simplest case the Rx event adapter service has a single <event
> > device, event port> across multiple eth ports; it dequeues from the wrr[],
> > buffers events, and bulk enqueues BATCH_SIZE events into the <event device,
> > event port>.
> >
> > With adapters having different <event device, event port> pairs, the code
> > can be optimized so that adapters that share a common <event device, event
> > port> refer to a common enqueue buffer { event dev, event port, buffer }
> > structure, but this adds more bookkeeping in the code.
> >
> > 3) Every adapter can be configured with max_nb_rx (a max number of packets
> > that it can process in any invocation), but max_nb_rx seems like a
> > service-level parameter rather than something summed across adapters.
> >
> > 1 & 3 could be solved by restricting the adapters to the same socket ID (as
> > in the first rte_event_eth_rx_adapter_configure() call), and perhaps using
> > the max value of max_nb_rx, or the same value of max_nb_rx, across
> > adapters. #2 is doable but adds a bit of code complexity to handle the
> > generic case.
> >
> > Before we go there, I wanted to check if there is a possible alternative
> > that would remove the difficulties above. Essentially, allow multiple ports
> > within an adapter but avoid the problem of inconsistent <eventdev, port>
> > combinations when using multiple ports with a single eventdev.
> >
> > Instead of
> > ==
> > rte_event_eth_rx_adapter_create()
> > rte_event_eth_rx_adapter_get_info();
> > rte_event_eth_rx_adapter_configure();
> > rte_event_eth_rx_adapter_queue_add();
> > ==
> >
> > How about?
> > ==
> >
> > int rte_event_eth_rx_adapter_get_info(uint8_t dev_id, uint8_t eth_port_id,
> >         struct rte_event_eth_rx_adap_info *info);
> >
> > /* adapter has an inbuilt port, no need to create a producer port */
> > #define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT    (1ULL << 0)
> > /* adapter does not need a service function */
> > #define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)
> >
> > struct rte_event_eth_rx_adap_info {
> >         uint64_t cap;
> > };
> >
> > struct rte_event_eth_rx_adapter_conf cfg;
> > cfg.event_port = event_port;
> > cfg.service_name = "rx_adapter_service";
> 
> Does the application need to specify the service name? IMO, it is better if
> the component (rx_adapter) defines its name and format and exposes them in
> rte_event_eth_rx_adapter.h.
> 
> >
> > /* all ports in eth_port_id[] have cap with
> >  * !RTE_EVENT_ETHDEV_CAP_INBUILT_PORT &&
> >  * !RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC */
> > rte_event_eth_rx_adapter_create(dev_id, eth_port_id[], N, id, &cfg);
> 
> The downside might be:
> - the application has a different flow based on the capability.
> Listing a few capabilities/limitations below.
> 
> > ===
> > int rte_event_eth_rx_adapter_queue_add() would need a port id in the N>1
> > port case, which can be ignored if the adapter doesn't need it (N=1).
> >
> > thanks for reading the long email, thoughts?
> 
> I have another thought on solving the above-mentioned downside.
> 
> - The theme is based on your original Rx adapter proposal but with eventdev
>   PMD ops (not adapter ops), i.e. reuse as much of your existing Rx adapter
>   implementation as possible as common code and add hooks for HW based
>   adapters. For example, before you add an <ethdev, queue_id> to "rx_poll" in
>   eth_poll_wrr_calc(), check whether an eventdev PMD op is available to add
>   it in HW. If yes, don't add it to "rx_poll".

This seems a better approach. In fact, we were also thinking of a similar
approach while reviewing the initial patch. This will also avoid redundant
code for the creation of services by separate event PMDs.
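
Just to confirm the idea, here is a minimal sketch of the hook as we
understand it (the op and helper names below are placeholders, not a
proposed final API):

/* Hypothetical check in the common Rx adapter code: before adding an
 * <ethdev, queue_id> to the SW "rx_poll" list, ask the eventdev PMD
 * whether it can take over the queue in HW.
 */
static int
rx_adapter_queue_add(struct rte_eventdev *dev, uint8_t eth_dev_id,
		int32_t rx_queue_id,
		struct rte_event_eth_rx_adapter_queue_conf *conf)
{
	/* PMD implements the op and accepts the queue: HW path, no polling */
	if (dev->dev_ops->rx_adap_add != NULL &&
	    dev->dev_ops->rx_adap_add(dev, eth_dev_id, rx_queue_id, conf) == 0)
		return 0;

	/* otherwise fall back to the SW service: add to the WRR poll list */
	return rx_adapter_sw_queue_add(dev, eth_dev_id, rx_queue_id, conf);
}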

> 
> adapter_api
> ------------
> int rte_event_eth_rx_adapter_create(uint8_t id,
> 		struct rte_event_eth_rx_adapter_conf *conf);
> int rte_event_eth_rx_adapter_queue_add(uint8_t id, uint8_t eth_dev_id,
> 		int32_t rx_queue_id,
> 		struct rte_event_eth_rx_adapter_queue_conf *conf);

Just wanted to know your opinion on adding other adapters, such as one for
crypto devices.

One way is to create a separate adapter for crypto devices (or any other
device which may be linked with event devices). But this would again cause
redundancy in service creation.

In my opinion these APIs can be independent of the "_eth" suffix and have an
enum of "device type" in the adapter capability, queue_add and queue_delete
APIs, so that the model is not limited to one type of device. For this series
only the ethernet type would be supported, with the possibility of adding
other device types in the future.
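
For illustration, a device-agnostic variant could look roughly like the
sketch below (all the names here are hypothetical):

/* Hypothetical device-agnostic adapter API, for illustration only */
enum rte_event_adapter_dev_type {
	RTE_EVENT_ADAPTER_DEV_ETH,	/* ethdev Rx queues (this series) */
	RTE_EVENT_ADAPTER_DEV_CRYPTO,	/* cryptodev, a possible future type */
};

int rte_event_adapter_queue_add(uint8_t id,
		enum rte_event_adapter_dev_type dev_type, uint8_t dev_id,
		int32_t queue_id, const void *queue_conf);

int rte_event_adapter_queue_del(uint8_t id,
		enum rte_event_adapter_dev_type dev_type, uint8_t dev_id,
		int32_t queue_id);

queue_conf would then be interpreted per dev_type, e.g. as the existing
rte_event_eth_rx_adapter_queue_conf for the ethernet type.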

Views on this?

> 
> eventdev PMD op API (not an adapter PMD, as discussed earlier)
> -------------------
> 
> 1) typedef uint64_t (*eventdev_rx_adap_capa)(struct rte_eventdev *dev,
> 		uint8_t ethdev_id);
> 
> Return the adapter capability of a given eventdev when it needs to be
> connected to a specific ethdev_id.
> 
> Possible capability values based on my understanding of the existing SW and
> Cavium HW PMDs. NXP folks can add new ones.
> 
> - RX_ADAPTER_CAP_INBUILT_PORT /* adapter has an inbuilt port, no need for
>   the common code to create a producer port */
> - RX_ADAPTER_CAP_SET_FLOW_ID /* adapter is capable of setting
>   RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID */
> - RX_ADAPTER_CAP_ADD_QUEUE /* adapter is capable of adding any specific
>   ethdev Rx queue to any eventdev queue. Some eventdev PMDs have a
>   limitation that once an <ethdev_id, queue_id> is connected to a specific
>   eventdev queue, all queue_ids under the same ethdev_id need to be
>   connected to the same eventdev queue, i.e. they work only in the
>   rte_event_eth_rx_adapter_queue_conf.rx_queue_id == -1 mode. */

These are more than sufficient for NXP's PMDs :)
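
To confirm our understanding, the common code could branch on these
capabilities roughly as below (flag names as proposed above; the helper and
field names are assumed for illustration):

/* Sketch of the capability checks in the common queue_add path */
static int
rx_adapter_check_caps(struct rte_eventdev *dev, uint8_t eth_dev_id,
		int32_t rx_queue_id,
		const struct rte_event_eth_rx_adapter_queue_conf *conf)
{
	uint64_t caps = rx_adap_capa_get(dev, eth_dev_id);

	if (caps & RX_ADAPTER_CAP_INBUILT_PORT)
		/* HW port: no producer port or service function needed */
		return 0;

	if (!(caps & RX_ADAPTER_CAP_ADD_QUEUE) && rx_queue_id != -1)
		/* PMD binds all Rx queues of an ethdev at once */
		return -EINVAL;

	if (!(caps & RX_ADAPTER_CAP_SET_FLOW_ID) &&
	    (conf->rx_queue_flags &
	     RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID))
		/* PMD cannot mark the flow id as valid in generated events */
		return -ENOTSUP;

	return 0;
}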

> 
> 
> 2) typedef int (*eventdev_rx_adap_add)(struct rte_eventdev *dev,
> 		uint8_t ethdev_id, int queue_id,
> 		struct rte_event_eth_rx_adapter_queue_conf *conf);
> -- if implemented by the eventdev PMD and it returns zero, then the common
> code does not need to poll
> 
> 
> 3) typedef int (*eventdev_rx_adap_del)(struct rte_eventdev *dev,
> 		uint8_t ethdev_id, int queue_id);
> -- remove a previously added queue
> 
> 
> *** I haven't spent a lot of time on API/macro names. Please use a better
> naming convention.
> 
> 
> Other notes based on your existing implementation + eventdev ops scheme:
> 
> 1) rte_event_eth_rx_adapter_create() registers the service function by
> default. It should be delayed until the common adapter code finds a device
> with the !RX_ADAPTER_CAP_INBUILT_PORT cap on
> rte_event_eth_rx_adapter_queue_add().
> 
> 2) Do we need rx_adapter start/stop functions?
> 
> 3) If it happens to be the case that rte_event_eth_rx_adapter_queue_add()
> uses only RX_ADAPTER_CAP_INBUILT_PORT devices, then the common code should
> not create any service.
> 
> 4) If the adapter uses one port with a service core and another with a HW
> adapter, rte_event_eth_rx_adapter_stats.rx_packets will not be updated
> correctly; we need eventdev PMD ops to get those stats. If we agree on the
> overall PMD ops + adapter API partitioning, then we can additionally refine
> the event PMD for stats, an xstats based scheme, etc.

Can't the stats be summed up from the eventdev (if the op is implemented) and
from the service running for this? Agreed that xstats would be more
beneficial.
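
Roughly something like the below (the op and structure members are assumed
for illustration):

/* Sketch: sum the SW service counters with HW counters fetched via an
 * optional eventdev PMD op.
 */
int
rte_event_eth_rx_adapter_stats_get(uint8_t id,
		struct rte_event_eth_rx_adapter_stats *stats)
{
	struct rx_adapter *rxa = rx_adapter_get(id); /* adapter private data */
	struct rte_eventdev *dev = &rte_eventdevs[rxa->eventdev_id];
	struct rte_event_eth_rx_adapter_stats hw_stats = {0};

	if (dev->dev_ops->rx_adap_stats_get != NULL)
		dev->dev_ops->rx_adap_stats_get(dev, &hw_stats);

	stats->rx_packets = rxa->stats.rx_packets + hw_stats.rx_packets;
	return 0;
}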

Thanks,
Nipun

> 
> 5) Specifying rte_event_eth_rx_adapter_conf.rx_event_port_id on
> rte_event_eth_rx_adapter_create() would waste one HW eventdev port if the
> device happens to use RX_ADAPTER_CAP_INBUILT_PORT on
> rte_event_eth_rx_adapter_queue_add().
> Unlike SW eventdev ports, HW eventdev ports are costly, so I think we need
> another eventdev PMD op to create service/producer ports, or some other
> scheme where the common code creates
> rte_event_eth_rx_adapter_conf.rx_event_port_id on demand.
> 
> Thoughts?


