[dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK
Vincent Jardin
vincent.jardin at 6wind.com
Wed Oct 26 20:37:05 CEST 2016
Le 26 octobre 2016 2:11:26 PM "Van Haaren, Harry"
<harry.van.haaren at intel.com> a écrit :
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
>>
>> So far, I have received constructive feedback from Intel, NXP and Linaro folks.
>> Let me know, if anyone else interested in contributing to the definition of
>> eventdev?
>>
>> If there are no major issues in the proposed spec, then Cavium would like to work on
>> implementing and upstreaming the common code (lib/librte_eventdev/) and
>> an associated HW driver. (The minor changes requested for v2 will be addressed
>> in the next version.)
>
> Hi All,
>
> I would like to propose a minor change to the rte_event struct, allowing some bits
> to be implementation specific. Currently the rte_event struct has no space
> for an implementation to store any metadata about the event. For software
> performance it would be really helpful if some bits were available for
> the implementation to keep some flags about each event.
>
> I suggest reworking the struct as below, which frees 6 bits that were
> otherwise wasted, and defining them as implementation specific. By
> implementation specific it is understood that the implementation can
> overwrite any information stored in those bits, and the application must
> not expect the data to remain after the event is scheduled.
>
> OLD:
> struct rte_event {
>     uint32_t flow_id:24;
>     uint32_t queue_id:8;
>     uint8_t  sched_type; /* Note: only 2 of the 8 bits are required */
>
> NEW:
> struct rte_event {
>     uint32_t flow_id:24;
>     uint32_t sched_type:2; /* reduced size: 2 bits is enough for the
>                               enqueue types Ordered, Atomic, Parallel */
>     uint32_t implementation:6; /* available for implementation-specific
>                                   metadata */
>     uint8_t  queue_id; /* still 8 bits as before */
Bitfields are efficient on Octeon. What about the other CPUs you have in
mind? x86 is not as efficient.
>
>
> Thoughts? -Harry