[dpdk-dev] [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues

Rao, Nikhil nikhil.rao at intel.com
Thu Jul 27 12:58:29 CEST 2017


Hi Jerin and all,

I ran into a few inconsistencies/complexities while implementing the SW
Rx event adapter. Below I first summarize this email thread, bringing
together details scattered across the various exchanges, and then check
whether there are changes that would simplify the implementation of the
SW Rx event adapter.

<start summary>
The Rx event adapter needs to support the following scenarios:

1) Ethdev HW is not capable of injecting packets and the eventdev is a
SW driver (any existing ethdev PMD + drivers/event/sw PMD combination)
2) Ethdev HW is not capable of injecting packets and the HW eventdev
driver is not compatible with it (any existing ethdev PMD +
drivers/event/octeontx PMD combination)
3) Ethdev HW is capable of injecting packets into a compatible HW
eventdev driver.

Cases 1) and 2) above are not different: in both cases we need a SW
thread that injects packets from the ethdev PMD into an eventdev PMD.

The APIs proposed are:

int rte_event_eth_rx_adapter_create(uint8_t dev_id,
         uint8_t eth_port_id, uint8_t id /* adapter ID */);

An adapter created as above has an ops struct that provides the
implementations of the functions below; the ops struct chosen for the
adapter depends on the <dev_id, eth_port_id> combination.

struct rte_event_eth_rx_adap_info {
         uint32_t cap;

/* adapter has inbuilt port, no need to create producer port */
#define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT  (1ULL << 0)
/* adapter does not need service function */
#define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)

};

int rte_event_eth_rx_adapter_get_info(uint8_t dev_id, uint8_t id,
             struct rte_event_eth_rx_adap_info *info);

struct rte_event_eth_rx_adapter_conf {
	/* Application specified service function name */
	const char *service_name;
	uint8_t rx_event_port_id;
	/**< Event port identifier, the adapter enqueues mbuf
	 * events to this port. Ignored when
	 * RTE_EVENT_ETHDEV_CAP_INBUILT_PORT is set.
	 */
};

int rte_event_eth_rx_adapter_configure(uint8_t dev_id, uint8_t id,
             struct rte_event_eth_rx_adapter_conf *cfg);

struct rte_event_eth_rx_adapter_queue_conf {
	... event info ...
	uint16_t servicing_weight;
	/**< Relative polling frequency of the ethernet receive queue. If
	 * this is set to zero, the Rx queue is interrupt driven (unless Rx
	 * queue interrupts are not enabled for the ethernet device).
	 */
	...
};

int rte_event_eth_rx_adapter_queue_add(uint8_t dev_id,
             uint8_t id, int32_t rx_queue_id,
             const struct rte_event_eth_rx_adapter_queue_conf *config);

int rte_event_eth_rx_adapter_queue_del(uint8_t dev_id,
             uint8_t id, int32_t rx_queue_id);

</end summary>
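
For context, here is a rough usage sketch of the proposed flow
(illustrative only: error handling, event port setup and the event
fields in the queue conf are elided, and the variable names are
placeholders, not part of the proposal):

struct rte_event_eth_rx_adap_info info;
struct rte_event_eth_rx_adapter_conf conf = { 0 };
struct rte_event_eth_rx_adapter_queue_conf qconf = { 0 };
uint8_t dev_id = 0, eth_port_id = 0, id = 0;
uint8_t rx_event_port_id = 0; /* event port set up by the application */

rte_event_eth_rx_adapter_create(dev_id, eth_port_id, id);
rte_event_eth_rx_adapter_get_info(dev_id, id, &info);

if (!(info.cap & RTE_EVENT_ETHDEV_CAP_INBUILT_PORT)) {
	/* SW injection path: the adapter enqueues through an application
	 * created event port and, unless
	 * RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC is set, runs as a service.
	 */
	conf.rx_event_port_id = rx_event_port_id;
	conf.service_name = "rx_adapter_service";
}
rte_event_eth_rx_adapter_configure(dev_id, id, &conf);

/* ... fill in the event info in qconf ... */
qconf.servicing_weight = 1; /* polled */
rte_event_eth_rx_adapter_queue_add(dev_id, id, 0 /* Rx queue 0 */, &qconf);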

In the case of a SW thread we would like to use the servicing weight
specified in the queue conf to do WRR across <ports, queues[]>. In
keeping with the adapter per <eventdev, eth port> model, one way to do
this is to use the same cfg.service_name in each
rte_event_eth_rx_adapter_configure() call.
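
For instance (illustrative only; dev_id, rx_event_port_id and the
adapter ids below are placeholders):

/* Two per-<eventdev, eth port> adapters mapped onto a single SW service
 * by passing the same service_name in both configure calls.
 */
struct rte_event_eth_rx_adapter_conf conf = {
	.service_name = "rx_adapter_service",
	.rx_event_port_id = rx_event_port_id, /* shared producer event port */
};

rte_event_eth_rx_adapter_configure(dev_id, rx_adapter_id_port0, &conf);
rte_event_eth_rx_adapter_configure(dev_id, rx_adapter_id_port1, &conf);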

However this creates a few difficulties/inconsistencies:

1) A service has the notion of a socket id. Multiple event dev IDs can
be included in the same service, but each event dev has its own socket
ID -> this seems to be an inconsistency that shouldn't be allowed by
design.

2) Say the Rx event adapter doesn't drop packets (this could be
configurable), i.e., if events cannot be enqueued into the event device
they remain in a buffer, and when the buffer fills up packets aren't
dequeued from the eth device.

In the simplest case the Rx event adapter service has a single <event
device, event port> across multiple eth ports: it dequeues according to
the wrr[] schedule, buffers the resulting events, and bulk enqueues
BATCH_SIZE events into the <event device, event port> (a rough sketch of
this loop is given after this list).

With adapters having different <event device, event port> pairs, the
code can be optimized so that adapters that share a common <event
device, event port> refer to a common enqueue buffer { event dev, event
port, buffer } structure, but this adds more bookkeeping in the code.

3) Every adapter can be configured with max_nb_rx (a maximum number of
packets that it can process in any invocation), but max_nb_rx seems like
a service level parameter rather than a summation across adapters.
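
To make 2) concrete, here is a rough sketch of the buffered, no-drop
enqueue loop such a service could run (illustrative only; BATCH_SIZE,
struct eth_rx_queue and wrr_next() are placeholders, not part of the
proposal):

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eventdev.h>

#define BATCH_SIZE 32

struct eth_rx_queue { uint16_t port; uint16_t queue; };
struct eth_rx_queue *wrr_next(void); /* placeholder WRR scheduler */

struct rx_enq_buffer {
	uint8_t event_dev;
	uint8_t event_port;
	uint16_t count;
	struct rte_event events[BATCH_SIZE];
};

static void
rx_adapter_service_func(struct rx_enq_buffer *buf)
{
	uint16_t space = BATCH_SIZE - buf->count;

	/* If the buffer is full, try to flush it; if the event device
	 * still cannot absorb the events, skip the Rx burst so that no
	 * packets are dropped (they stay in the ethdev Rx queue).
	 */
	if (space == 0) {
		uint16_t n = rte_event_enqueue_burst(buf->event_dev,
					buf->event_port, buf->events,
					buf->count);
		if (n < buf->count) {
			memmove(buf->events, &buf->events[n],
				(buf->count - n) * sizeof(buf->events[0]));
			buf->count -= n;
			return;
		}
		buf->count = 0;
		space = BATCH_SIZE;
	}

	/* Next <eth port, Rx queue> according to the WRR schedule built
	 * from the per-queue servicing_weight.
	 */
	struct eth_rx_queue *q = wrr_next();
	struct rte_mbuf *mbufs[BATCH_SIZE];
	uint16_t nb_rx = rte_eth_rx_burst(q->port, q->queue, mbufs, space);

	for (uint16_t i = 0; i < nb_rx; i++) {
		struct rte_event *ev = &buf->events[buf->count++];
		ev->mbuf = mbufs[i];
		/* ... fill in flow id, queue_id, sched_type etc. from
		 * the queue conf ...
		 */
	}
}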

1 & 3 could be solved by restricting the adapters to the same socket ID
(i.e., the one from the first rte_event_eth_rx_adapter_configure()
call), and perhaps using the max value of max_nb_rx, or the same value
of max_nb_rx, across adapters. #2 is doable but adds a bit of code
complexity to handle the generic case.

Before we go there, I wanted to check if an alternative is possible that
would remove the difficulties above: essentially, allow multiple eth
ports within an adapter while avoiding the problem of inconsistent
<eventdev, port> combinations when using multiple ports with a single
eventdev.

Instead of
==
rte_event_eth_rx_adapter_create()
rte_event_eth_rx_adapter_get_info();
rte_event_eth_rx_adapter_configure();
rte_event_eth_rx_adapter_queue_add();
==

How about?
==

rte_event_eth_rx_adapter_get_info(uint8_t dev_id, uint8_t eth_port_id,
         struct rte_event_eth_rx_adap_info *info);

struct rte_event_eth_rx_adap_info {
         uint32_t cap;

/* adapter has inbuilt port, no need to create producer port */
#define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT  (1ULL << 0)
/* adapter does not need service function */
#define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)

};

struct rte_event_eth_rx_adapter_conf cfg;
cfg.rx_event_port_id = event_port;
cfg.service_name = "rx_adapter_service";

/* all ports in eth_port_id[] have cap ==
 * !RTE_EVENT_ETHDEV_CAP_INBUILT_PORT &&
 * !RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC
 */
rte_event_eth_rx_adapter_create(dev_id, eth_port_id, N, id, &cfg);
==
int rte_event_eth_rx_adapter_queue_add() would need an eth port id
parameter in the N>1 port case; it can be ignored if the adapter doesn't
need it (N=1).
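
i.e., something like this hypothetical revision (the eth_port_id
parameter name is illustrative):

int rte_event_eth_rx_adapter_queue_add(uint8_t dev_id, uint8_t id,
             uint8_t eth_port_id, /* ignored when the adapter has one port */
             int32_t rx_queue_id,
             const struct rte_event_eth_rx_adapter_queue_conf *config);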

Thanks for reading the long email. Thoughts?

Nikhil

