examples/dma: support DMA dequeue when no packet received

Message ID 20220725081212.4473-1-fengchengwen@huawei.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Headers
Series examples/dma: support DMA dequeue when no packet received

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/github-robot: build success github build: passed

Commit Message

fengchengwen July 25, 2022, 8:12 a.m. UTC
  Currently the example uses DMA in asynchronous mode, as follows:
	nb_rx = rte_eth_rx_burst();
	if (nb_rx == 0)
		continue;
	...
	dma_enqueue(); // enqueue the received packets copy request
	nb_cpl = dma_dequeue(); // get copy completed packets
	...

There is no waiting inside dma_dequeue(), which is why the mode is
called asynchronous. If no packets are received, dma_dequeue() is not
called, but some packets enqueued in the previous cycle may still be
in the DMA queue. As a result, when the traffic is stopped, the sent
and received packet counts are unbalanced from the perspective of the
traffic generator.

This patch performs the DMA dequeue even when no packets are received,
which makes it possible to judge the test result by comparing the sent
and received packet counts on the traffic generator side.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 examples/dma/dmafwd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

Bruce Richardson July 25, 2022, 10:01 a.m. UTC | #1
On Mon, Jul 25, 2022 at 04:12:12PM +0800, Chengwen Feng wrote:
> Currently the example uses DMA in asynchronous mode, as follows:
> 	nb_rx = rte_eth_rx_burst();
> 	if (nb_rx == 0)
> 		continue;
> 	...
> 	dma_enqueue(); // enqueue the received packets copy request
> 	nb_cpl = dma_dequeue(); // get copy completed packets
> 	...
> 
> There is no waiting inside dma_dequeue(), which is why the mode is
> called asynchronous. If no packets are received, dma_dequeue() is not
> called, but some packets enqueued in the previous cycle may still be
> in the DMA queue. As a result, when the traffic is stopped, the sent
> and received packet counts are unbalanced from the perspective of the
> traffic generator.
> 
> This patch performs the DMA dequeue even when no packets are received,
> which makes it possible to judge the test result by comparing the sent
> and received packet counts on the traffic generator side.
> 
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
>  examples/dma/dmafwd.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
> index 67b5a9b22b..e3fe226dff 100644
> --- a/examples/dma/dmafwd.c
> +++ b/examples/dma/dmafwd.c
> @@ -408,7 +408,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
>  		nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
>  			pkts_burst, MAX_PKT_BURST);
>  
> -		if (nb_rx == 0)
> +		if (nb_rx == 0 && copy_mode != COPY_MODE_DMA_NUM)
>  			continue;
>  
>  		port_statistics.rx[rx_config->rxtx_port] += nb_rx;

With this change, we would work through all the receive packet
processing code, calling all its functions, just with a packet count of
zero. I therefore wonder whether it would be cleaner to do the dma_dequeue
immediately here on receiving zero, and then jump to handle those
dequeued packets. Something like the diff below.

/Bruce

@@ -408,8 +408,13 @@ dma_rx_port(struct rxtx_port_config *rx_config)
                nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
                        pkts_burst, MAX_PKT_BURST);
 
-               if (nb_rx == 0)
+               if (nb_rx == 0) {
+                       if (copy_mode == COPY_MODE_DMA_NUM &&
+                                       (nb_rx = dma_dequeue(pkts_burst, pkts_burst_copy,
+                                               MAX_PKT_BURST, rx_config->dmadev_ids[i])) > 0)
+                               goto handle_tx;
                        continue;
+               }
 
                port_statistics.rx[rx_config->rxtx_port] += nb_rx;
 
@@ -450,6 +455,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
                                        pkts_burst_copy[j]);
                }
 
+handle_tx:
                rte_mempool_put_bulk(dma_pktmbuf_pool,
                        (void *)pkts_burst, nb_rx);
  
fengchengwen July 25, 2022, 12:31 p.m. UTC | #2
On 2022/7/25 18:01, Bruce Richardson wrote:
> On Mon, Jul 25, 2022 at 04:12:12PM +0800, Chengwen Feng wrote:

...

>> -		if (nb_rx == 0)
>> +		if (nb_rx == 0 && copy_mode != COPY_MODE_DMA_NUM)
>>  			continue;
>>  
>>  		port_statistics.rx[rx_config->rxtx_port] += nb_rx;
> 
> With this change, we would work through all the receive packet
> processing code, calling all its functions, just with a packet count of
> zero. I therefore wonder whether it would be cleaner to do the dma_dequeue
> immediately here on receiving zero, and then jump to handle those
> dequeued packets. Something like the diff below.
> 
> /Bruce

Hi Bruce,

  Thanks for your review; fixed in V2.

> 
> @@ -408,8 +408,13 @@ dma_rx_port(struct rxtx_port_config *rx_config)
>                 nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
>                         pkts_burst, MAX_PKT_BURST);
>  
> -               if (nb_rx == 0)
> +               if (nb_rx == 0) {
> +                       if (copy_mode == COPY_MODE_DMA_NUM &&
> +                                       (nb_rx = dma_dequeue(pkts_burst, pkts_burst_copy,
> +                                               MAX_PKT_BURST, rx_config->dmadev_ids[i])) > 0)
> +                               goto handle_tx;
>                         continue;
> +               }
>  
>                 port_statistics.rx[rx_config->rxtx_port] += nb_rx;
>  
> @@ -450,6 +455,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
>                                         pkts_burst_copy[j]);
>                 }
>  
> +handle_tx:
>                 rte_mempool_put_bulk(dma_pktmbuf_pool,
>                         (void *)pkts_burst, nb_rx);
> 
> 
> .
>
  

Patch

diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 67b5a9b22b..e3fe226dff 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -408,7 +408,7 @@  dma_rx_port(struct rxtx_port_config *rx_config)
 		nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
 			pkts_burst, MAX_PKT_BURST);
 
-		if (nb_rx == 0)
+		if (nb_rx == 0 && copy_mode != COPY_MODE_DMA_NUM)
 			continue;
 
 		port_statistics.rx[rx_config->rxtx_port] += nb_rx;