[dpdk-dev] [RFC v3 1/1] lib: add compressdev API

Ahmed Mansour ahmed.mansour at nxp.com
Mon Jan 29 18:16:05 CET 2018


On 1/29/2018 7:26 AM, Verma, Shally wrote:
> Hi
>
>> -----Original Message-----
>> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
>> Sent: 26 January 2018 00:13
>> To: Verma, Shally <Shally.Verma at cavium.com>; Ahmed Mansour
>> <ahmed.mansour at nxp.com>; dev at dpdk.org; Akhil Goyal
>> <akhil.goyal at nxp.com>
>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
>> Prasad <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
>> <pablo.de.lara.guarch at intel.com>; Gupta, Ashish
>> <Ashish.Gupta at cavium.com>; Sahu, Sunila <Sunila.Sahu at cavium.com>;
>> Jain, Deepak K <deepak.k.jain at intel.com>; Hemant Agrawal
>> <hemant.agrawal at nxp.com>; Roy Pledge <roy.pledge at nxp.com>; Youri
>> Querry <youri.querry_1 at nxp.com>; Trahe, Fiona <fiona.trahe at intel.com>
>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>
>> Hi Shally, Ahmed,
>>
>>
>>> -----Original Message-----
>>> From: Verma, Shally [mailto:Shally.Verma at cavium.com]
>>> Sent: Thursday, January 25, 2018 10:25 AM
>>> To: Ahmed Mansour <ahmed.mansour at nxp.com>; Trahe, Fiona
>> <fiona.trahe at intel.com>;
>>> dev at dpdk.org; Akhil Goyal <akhil.goyal at nxp.com>
>>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
>> Prasad
>>> <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
>> <pablo.de.lara.guarch at intel.com>;
>>> Gupta, Ashish <Ashish.Gupta at cavium.com>; Sahu, Sunila
>> <Sunila.Sahu at cavium.com>; Jain, Deepak K
>>> <deepak.k.jain at intel.com>; Hemant Agrawal
>> <hemant.agrawal at nxp.com>; Roy Pledge
>>> <roy.pledge at nxp.com>; Youri Querry <youri.querry_1 at nxp.com>
>>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ahmed Mansour [mailto:ahmed.mansour at nxp.com]
>>>> Sent: 25 January 2018 01:06
>>>> To: Verma, Shally <Shally.Verma at cavium.com>; Trahe, Fiona
>>>> <fiona.trahe at intel.com>; dev at dpdk.org; Akhil Goyal
>>>> <akhil.goyal at nxp.com>
>>>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
>>>> Prasad <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
>>>> <pablo.de.lara.guarch at intel.com>; Gupta, Ashish
>>>> <Ashish.Gupta at cavium.com>; Sahu, Sunila <Sunila.Sahu at cavium.com>;
>>>> Jain, Deepak K <deepak.k.jain at intel.com>; Hemant Agrawal
>>>> <hemant.agrawal at nxp.com>; Roy Pledge <roy.pledge at nxp.com>; Youri
>>>> Querry <youri.querry_1 at nxp.com>
>>>> Subject: Re: [RFC v3 1/1] lib: add compressdev API
>>>>
>>>> Hi All,
>>>>
>>>> Please see responses in line.
>>>>
>>>> Thanks,
>>>>
>>>> Ahmed
>>>>
>>>> On 1/23/2018 6:58 AM, Verma, Shally wrote:
>>>>> Hi Fiona
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
>>>>>> Sent: 19 January 2018 17:30
>>>>>> To: Verma, Shally <Shally.Verma at cavium.com>; dev at dpdk.org;
>>>>>> akhil.goyal at nxp.com
>>>>>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
>>>>>> Prasad <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch,
>> Pablo
>>>>>> <pablo.de.lara.guarch at intel.com>; Gupta, Ashish
>>>>>> <Ashish.Gupta at cavium.com>; Sahu, Sunila
>> <Sunila.Sahu at cavium.com>;
>>>>>> Jain, Deepak K <deepak.k.jain at intel.com>; Hemant Agrawal
>>>>>> <hemant.agrawal at nxp.com>; Roy Pledge <roy.pledge at nxp.com>;
>> Youri
>>>>>> Querry <youri.querry_1 at nxp.com>; Ahmed Mansour
>>>>>> <ahmed.mansour at nxp.com>; Trahe, Fiona <fiona.trahe at intel.com>
>>>>>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>>>>>
>>>>>> Hi Shally,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Verma, Shally [mailto:Shally.Verma at cavium.com]
>>>>>>> Sent: Thursday, January 18, 2018 12:54 PM
>>>>>>> To: Trahe, Fiona <fiona.trahe at intel.com>; dev at dpdk.org
>>>>>>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya,
>> Narayana
>>>>>> Prasad
>>>>>>> <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
>>>>>> <pablo.de.lara.guarch at intel.com>;
>>>>>>> Gupta, Ashish <Ashish.Gupta at cavium.com>; Sahu, Sunila
>>>>>> <Sunila.Sahu at cavium.com>; Jain, Deepak K
>>>>>>> <deepak.k.jain at intel.com>; Hemant Agrawal
>>>>>> <hemant.agrawal at nxp.com>; Roy Pledge
>>>>>>> <roy.pledge at nxp.com>; Youri Querry <youri.querry_1 at nxp.com>;
>>>>>> Ahmed Mansour
>>>>>>> <ahmed.mansour at nxp.com>
>>>>>>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>>>>>>
>>>>>>> Hi Fiona
>>>>>>>
>>>>>>> While revisiting this, we identified a few questions and additions. Please
>>>>>>> see them inline.
>>>>>>>> -----Original Message-----
>>>>>>>> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
>>>>>>>> Sent: 15 December 2017 23:19
>>>>>>>> To: dev at dpdk.org; Verma, Shally <Shally.Verma at cavium.com>
>>>>>>>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya,
>> Narayana
>>>>>>>> Prasad <NarayanaPrasad.Athreya at cavium.com>;
>>>>>>>> pablo.de.lara.guarch at intel.com; fiona.trahe at intel.com
>>>>>>>> Subject: [RFC v3 1/1] lib: add compressdev API
>>>>>>>>
>>>>>>>> Signed-off-by: Trahe, Fiona <fiona.trahe at intel.com>
>>>>>>>> ---
>>>>>>> //snip
>>>>>>>
>>>>>>>> +
>>>>>>>> +int
>>>>>>>> +rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
>>>>>>>> +		uint32_t max_inflight_ops, int socket_id)
>>>>>>> [Shally] Is max_inflight_ops different from nb_streams_per_qp in struct
>>>>>>> rte_compressdev_info?
>>>>>>> I assume they both carry the same purpose. If yes, then it would be better
>>>>>>> to use a single naming convention to avoid confusion.
>>>>>> [Fiona] No, I think they have different purposes.
>>>>>> max_inflight_ops should be used to configure the qp with the number of ops
>>>>>> the application expects to be able to submit to the qp before it needs to
>>>>>> poll for a response. It can be configured differently for each qp. In the QAT
>>>>>> case it dictates the depth of the qp created; it may have different
>>>>>> implications on other PMDs.
>>>>>> nb_sessions_per_qp and nb_streams_per_qp are limitations the device reports
>>>>>> and are the same for all qps on the device. QAT doesn't have those
>>>>>> limitations and so would report 0, however I assumed they may be necessary
>>>>>> for other devices.
>>>>>> This assumption is based on the patch submitted by NXP to cryptodev in
>>>>>> Feb 2017:
>>>>>> http://dpdk.org/ml/archives/dev/2017-March/060740.html
>>>>>> I also assume these are not necessarily the max number of sessions in ops
>>>>>> on the qp at a given time, but the total number attached, i.e. if the device
>>>>>> has this limitation then sessions must be attached to qps, and presumably
>>>>>> reserve some resources. Being attached doesn't imply there is an op on the
>>>>>> qp at that time using that session. So it's not related to the inflight op
>>>>>> count, but to the number of sessions attached/detached to the qp.
>>>>>> Including Akhil on the To list, maybe NXP can confirm if these params are
>>>>>> needed.
>>>>> [Shally] Ok. Then let's wait for NXP to confirm this requirement, as the
>>>>> spec currently doesn't have any API to attach a queue pair to a specific
>>>>> session or stream as cryptodev does.
>>>>> But then how could the application know the limit on max_inflight_ops
>>>>> supported on a qp? It can pass any random number during qp_setup().
>>>>> Do you believe we need to add a capability field in dev_info to indicate
>>>>> the limit on max_inflight_ops?
>>>>> Thanks
>>>>> Shally
>>>> [Ahmed] @Fiona This looks ok. max_inflight_ops makes sense. I understand
>>>> it as a push-back mechanism per qp. We do not have a physical limit on the
>>>> number of streams or sessions on a qp in our hardware, so we would
>>>> return 0 here as well.
>>>> @Shally in our PMD implementation we do not attach streams or sessions
>>>> to a particular qp. Regarding max_inflight_ops, I think that limit
>>> [Shally] Ok. We too don't have any such limit defined. So, if these are
>>> redundant fields then they can be removed until a requirement is identified
>>> in the context of compressdev.
>> [Fiona] Ok, so it seems we're all agreed to remove max_nb_sessions_per_qp and
>> max_nb_streams_per_qp from rte_compressdev_info.
>> I think we're also agreed to keep max_inflight_ops on the qp_setup.
> [Shally] yes, by me.
[Ahmed] That works.
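For illustration, here is a minimal sketch of how an application might then
configure a qp using the signature quoted above (the device id, qp id and the
depth of 512 are placeholder values I picked, not recommendations):

/* Illustrative only: ids and depth are placeholders. */
uint8_t  dev_id = 0;
uint16_t qp_id = 0;
uint32_t max_inflight_ops = 512;	/* ops the app expects to keep outstanding */

int ret = rte_compressdev_queue_pair_setup(dev_id, qp_id,
		max_inflight_ops, rte_socket_id());
if (ret < 0)
	return ret;	/* qp could not be set up with the requested depth */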
>
>> It's not available in the info and, if I understand you both correctly, we don't
>> need to add it there as a hw limitation or capability.
> [Shally] I'm fine either way. No preference here currently.
[Ahmed] Yes.
>
>> I'd expect the appl to set it to some value which is probably lower than any
>> hardware limitation. The appl may then perform many enqueue_bursts until the
>> qp is full and, if unable to enqueue a burst, should try dequeuing to free up
>> space on the qp for more enqueue_bursts.
> [Shally] The qp doesn't necessarily have to be full (depending on the PMD implementation though) to run into this condition, especially when, say, the HW limit < the application's max_inflight_ops.
> Thus, I would rephrase it as:
> "the application may enqueue bursts up to the limit set in qp_setup() and, if enqueue_burst() returns a number < the total nb_ops, then wait on dequeue to free up space".
[Ahmed] Agreed. The hard limit is left to the implementation.
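As a rough sketch of that enqueue/dequeue flow (this assumes cryptodev-style
burst calls, e.g. rte_compressdev_enqueue_burst()/_dequeue_burst(); those names
and signatures are my assumption on top of this RFC, not confirmed API):

/* Sketch only: burst API names are assumed, not part of the quoted RFC. */
static void
submit_all(uint8_t dev_id, uint16_t qp_id,
		struct rte_comp_op **ops, uint16_t nb_ops)
{
	struct rte_comp_op *deq_ops[32];
	uint16_t nb_enq = 0;

	while (nb_enq < nb_ops) {
		nb_enq += rte_compressdev_enqueue_burst(dev_id, qp_id,
				&ops[nb_enq], nb_ops - nb_enq);
		if (nb_enq < nb_ops) {
			/* qp is at its limit: dequeue to free up space */
			uint16_t nb_deq = rte_compressdev_dequeue_burst(
					dev_id, qp_id, deq_ops, RTE_DIM(deq_ops));
			/* ... process the nb_deq completed ops ... */
			(void)nb_deq;
		}
	}
}

Whether the limit hit first is the one from qp_setup() or the hardware's own
depth shouldn't matter to the application; the return value of enqueue_burst()
is the only signal it needs.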
>
>> I think the value it's set to can give the application some influence over
>> latency vs throughput.
>> E.g. if it's set to a very large number then it allows the PMD to stockpile
>> requests, which can result in longer latency, but optimal throughput as it's
>> easier to keep the engines supplied with requests. If set very small, latency
>> may be short, as requests get to the engines sooner, but there's a risk of the
>> engines running out of requests if the PMD manages to process everything
>> before the application tops up the qp.
> [Shally] I concur with you.
[Ahmed] Makes sense.
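To make that trade-off concrete, a hypothetical way an application could pick
the value (the numbers and the prefer_low_latency flag below are purely
illustrative, not tuned recommendations):

/* Hypothetical tuning: larger depth favours throughput, smaller favours latency. */
#define QP_DEPTH_THROUGHPUT	4096	/* lets the PMD stockpile requests */
#define QP_DEPTH_LATENCY	64	/* requests reach the engines sooner */

uint32_t max_inflight_ops = prefer_low_latency ?
		QP_DEPTH_LATENCY : QP_DEPTH_THROUGHPUT;
rte_compressdev_queue_pair_setup(dev_id, qp_id, max_inflight_ops, socket_id);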
>
>>>
>>>> should be independent of hardware. Not all enqueues must succeed. The
>>>> hardware can push back against the enqueuer dynamically if the resources
>>>> needed to accommodate additional ops are not available yet. This push
>>>> back happens in the software if the user sets a max_inflight_ops that is
>>>> less than the hardware max_inflight_ops. The same return pathway can be
>>>> exercised if the user actually attempts to enqueue more than the
>>>> max_inflight_ops supported by the hardware.
>>> [Shally] Ok. This sounds fine to me. As you mentioned, we can let the
>>> application set up a queue pair with any max_inflight_ops and, during
>>> enqueue_burst(), leave it to the hardware to consume as much as it can,
>>> subject to the limit set in qp_setup().
>>> So, this doesn't seem to be a hard requirement to expose in dev_info. The
>>> only knock-on effect I see is that the same testcase can then behave
>>> differently with different PMDs, as each PMD may have a different support
>>> level for the same max_inflight_ops in their qp_setup().
>


