Bug 935
Summary: | aesni_mb_pmd does not trigger parallel processing for multiple jobs | |
---|---|---|---
Product: | DPDK | Reporter: | changchun zhang (changchun.zhang)
Component: | cryptodev | Assignee: | dev
Status: | UNCONFIRMED | |
Severity: | major | CC: | pablo.de.lara.guarch, roy.fan.zhang
Priority: | Normal | |
Version: | 20.11 | |
Target Milestone: | --- | |
Hardware: | x86 | |
OS: | Linux | |
Description
changchun zhang
2022-02-07 01:56:12 CET
Hi,

I would not consider this a bug. This only happens if a single operation is enqueued and the algorithm is implemented in a multi-buffer "way". The PMD expects a burst of packets to be submitted, and therefore crypto processing is done and some operations are returned, so these last lines of code will not be executed. The purpose of this check is to avoid a scenario where no operations are returned when calling crypto_dequeue_burst() while there are still outstanding operations in the intel-ipsec-mb buffer manager.

Thanks,
Pablo

Hi Pablo,

Thanks for looking into it. So, to make sure operations are processed in parallel inside the intel-ipsec-mb library, the application has to enqueue enough operations each time. It seems the application layer is responsible for gathering multiple operations before enqueuing the burst, even though intel-ipsec-mb itself waits for multiple jobs to fill the multiple lanes used for SIMD operations, as long as no flush command is issued. Sometimes the security traffic is not that high, so there may be only one pending operation in the queue at a time. In that situation, parallel computation does not actually happen unless the application waits for multiple packets before enqueuing. So, if this is not a bug, I am wondering whether it is a limitation.

Thanks,
Changchun

Hi Changchun,

Thanks! If the change is done as you modified it, some packets would be left in the mb mgr queue at the end, which we cannot allow. The flush is a necessary "evil" to make sure all packets are returned to the application in the end. But I guess there is no good solution to make everybody happy. However, this can indeed be tweaked on the application side. If latency is not a big issue, you may keep a counter of enqueued packets and dequeue once it reaches a certain threshold - this should achieve the parallelism you expect, with one necessary flush at the end. But as a library and a PMD we cannot make such assumptions; for example, we do not know when the next enqueue/dequeue call will come. Of course, we could enrich the API, for example with a forced manual flush or other functionality - feel free to send a patch to the mailing list.

Regards,
Fan

Hi Fan,

Yes, I understand my change is just a workaround, with the side effect you mentioned. And I agree with you that the application can do something. If we prefer to do nothing in the PMD, would it be possible to add more comments explaining this situation or limitation? Anyway, thanks for confirming this observation.

Best,
Changchun
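To illustrate the dequeue behaviour Pablo describes, here is a minimal C sketch against the public cryptodev API (device and queue-pair setup omitted; `dev_id`, `qp_id` and the burst size of 32 are placeholders, not anything mandated by the PMD): because the aesni_mb dequeue path flushes the intel-ipsec-mb manager when no completed jobs are available, repeatedly calling dequeue eventually returns every outstanding operation, even a lone one.

```c
#include <rte_common.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>

/* Drain every outstanding operation from a queue pair. This relies on
 * the behaviour described above: when the SIMD lanes are not full, the
 * PMD's dequeue path flushes the intel-ipsec-mb buffer manager, so no
 * operation is stranded there. */
static void
drain_outstanding(uint8_t dev_id, uint16_t qp_id, unsigned int outstanding)
{
	struct rte_crypto_op *deq[32];

	while (outstanding > 0) {
		uint16_t n = rte_cryptodev_dequeue_burst(dev_id, qp_id,
							 deq, RTE_DIM(deq));
		/* ...process and free the n completed operations here... */
		outstanding -= n;
	}
}
```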
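Fan's suggested workaround can be sketched the same way. This is only an application-side batching idea, assuming latency is tolerable; `BATCH_THRESHOLD`, `struct op_batcher` and `stage_and_enqueue()` are illustrative names, not part of any DPDK API.

```c
#include <rte_crypto.h>
#include <rte_cryptodev.h>

#define BATCH_THRESHOLD 8	/* illustrative: tune towards the SIMD lane count */

struct op_batcher {
	struct rte_crypto_op *staged[BATCH_THRESHOLD];
	unsigned int count;
};

/* Stage one operation and enqueue the whole batch once the threshold is
 * reached, so intel-ipsec-mb can fill its lanes from a single burst. */
static void
stage_and_enqueue(struct op_batcher *b, struct rte_crypto_op *op,
		  uint8_t dev_id, uint16_t qp_id)
{
	b->staged[b->count++] = op;
	if (b->count == BATCH_THRESHOLD) {
		uint16_t sent = rte_cryptodev_enqueue_burst(dev_id, qp_id,
							    b->staged, b->count);
		/* For brevity this assumes the full burst is accepted; a
		 * real application must retry the (count - sent) leftovers. */
		(void)sent;
		b->count = 0;
	}
}
```

Pairing this with a periodic timeout that enqueues (and then dequeues) whatever is staged bounds the latency of the last few packets, which corresponds to the "one necessary flush in the end" Fan mentions.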