[dpdk-dev] 2.3 Roadmap

Yoshinobu Inoue inoue.yoshinobu at jp.fujitsu.com
Wed Dec 2 01:53:37 CET 2015


Hello Bruce,

> Hi,
> 
> that is indeed very similar to what we are thinking ourselves. Is there any of
> what you have already done that you could contribute publicly to save us
> duplicating some of your effort? [The one big difference is that we are not
> thinking of enabling kni permanently for each port, as the ethtool support is
> only present for a couple of NIC types, and solving that is a separate issue. :-)]
> 
> /Bruce

It seems there has been some progress in the thread, but if there is still
anything worthwhile doing, then yes, I think I can.
My DPDK 1.6 work is finished, and my boss suggested that I spend more time
contributing something from now on.
I'm getting used to DPDK 2.1 (skipping 1.7 and 1.8) and will then try 2.2 and
the current tree in the near future.


From a rough check of the later progress in the thread, being able to capture
the packet flow anywhere in the code path, and to filter it with BPF, seems
quite nice and desirable.
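
As a rough sketch of what the BPF side could look like: stock libpcap can
compile a filter expression on a "dead" handle and run it with bpf_filter()
over raw packet bytes at a capture point. The wiring around it (where the
bytes come from) is assumed:

    #include <pcap/pcap.h>

    static struct bpf_program cap_prog;   /* one filter per capture point */

    /* Compile "expr" (e.g. "tcp port 80") once at configuration time. */
    static int
    cap_filter_init(const char *expr, int snaplen)
    {
        pcap_t *dead = pcap_open_dead(DLT_EN10MB, snaplen);
        int ret = pcap_compile(dead, &cap_prog, expr, 1,
                               PCAP_NETMASK_UNKNOWN);
        pcap_close(dead);
        return ret;                       /* 0 on success */
    }

    /* Run the compiled filter over one packet; nonzero means "capture it". */
    static int
    cap_filter_match(const unsigned char *pkt, unsigned int wirelen,
                     unsigned int caplen)
    {
        return bpf_filter(cap_prog.bf_insns, pkt, wirelen, caplen) != 0;
    }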

I have just one more comment, from a more operational point of view.
I once implemented a packet capture API that copies packets into a
shared-memory FIFO queue, with a modified libpcap library reading the packets
back out of that queue, thus enabling capture anywhere. In the end, though, I
did not use the functionality very much, since it required hard-coding a call
to the capture API at every point where capture was wanted, and that was
somewhat bothersome.
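
For concreteness, a minimal sketch of the copy-to-FIFO side of that approach,
using an rte_ring as the shared-memory FIFO; cap_fifo and cap_pool are assumed
to be created at init time, and the actual implementation may well have
differed:

    #include <rte_ring.h>
    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    static struct rte_ring *cap_fifo;     /* shared-memory FIFO queue */
    static struct rte_mempool *cap_pool;  /* mbuf pool for capture copies */

    /* Copy one packet into the capture FIFO; the original mbuf is left
     * untouched. A reader (e.g. the modified libpcap) dequeues and frees
     * the copies at its own pace. */
    static inline void
    cap_enqueue(const struct rte_mbuf *m)
    {
        struct rte_mbuf *c = rte_pktmbuf_alloc(cap_pool);
        uint16_t len;

        if (c == NULL)
            return;                          /* pool empty: drop the copy */
        len = rte_pktmbuf_data_len(m);
        if (len > rte_pktmbuf_tailroom(c))
            len = rte_pktmbuf_tailroom(c);   /* truncate, like a snaplen */
        rte_memcpy(rte_pktmbuf_mtod(c, void *),
                   rte_pktmbuf_mtod(m, const void *), len);
        c->data_len = len;
        c->pkt_len = len;
        if (rte_ring_enqueue(cap_fifo, c) != 0)
            rte_pktmbuf_free(c);             /* FIFO full: drop the copy */
    }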
There was another, simpler packet trace mechanism that traced at several
pre-defined points, and that was adequate for many cases.

I think that when introducing this kind of anywhere-capture, it will be much
easier for ordinary users if each capture point is pre-defined, such as the
input and output of each packet processing module and the input and output of
each internal FIFO queue.
An outer tool that displays the internal functional topology would also be
desirable (plain ASCII art would be fine): it could show each capture point by
ID number, show whether capturing is enabled at each of them, and let us
enable/disable each capture point simply by specifying its ID number.
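
A minimal sketch of such a pre-defined capture-point table, with
enable/disable by ID number and an ASCII listing; all the names here are made
up for illustration:

    #include <stdio.h>

    #define CAP_MAX_POINTS 32

    struct cap_point {
        const char *name;   /* e.g. "ipsec-in", "fifo0-out"; NULL = unused */
        int enabled;
    };

    static struct cap_point cap_points[CAP_MAX_POINTS];

    /* Called from e.g. a CLI handler: "capture enable 3". */
    static int
    cap_set(unsigned int id, int on)
    {
        if (id >= CAP_MAX_POINTS || cap_points[id].name == NULL)
            return -1;
        cap_points[id].enabled = on;
        return 0;
    }

    /* ASCII listing of every capture point and its state. */
    static void
    cap_show(FILE *f)
    {
        unsigned int i;

        for (i = 0; i < CAP_MAX_POINTS; i++)
            if (cap_points[i].name != NULL)
                fprintf(f, "[%2u] %-16s %s\n", i, cap_points[i].name,
                        cap_points[i].enabled ? "on" : "off");
    }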

If there were modified libpcap tools for this mechanism, that would be even
more helpful: they could specify a capture point ID number and dump the
packets from that point, and also set and update the BPF filter for each
capture point ID.
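
Hypothetically, usage might look something like this (capture-ctl and
dpdk-tcpdump are invented names for the outer tools):

    capture-ctl list                      # show capture points and on/off state
    capture-ctl enable 3 'tcp port 80'    # enable point 3 with a BPF filter
    dpdk-tcpdump -c 3 -w point3.pcap      # dump the packets seen at point 3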


Regards,
Yoshinobu Inoue


From: Bruce Richardson <bruce.richardson at intel.com>
Subject: Re: [dpdk-dev] 2.3 Roadmap
Date: Tue, 1 Dec 2015 11:58:16 +0000

> On Tue, Dec 01, 2015 at 08:26:39PM +0900, Yoshinobu Inoue wrote:
>> Hello DPDK list,
>> 
>> I've been so far just roughly reading messages, as I've been working on my
>> company's product based on DPDK 1.6 for some time and haven't yet caught up
>> with the newer releases,
>> but since I implemented packet capture for my product in just the way
>> commented on here (using KNI),
>> 
>> > Our current thinking is to use kni to mirror packets into the kernel itself,
>> > so that all standard linux capture tools can then be used.
>> > 
>> > /Bruce
>> 
>> I felt like giving some comments from my humble experience...
>> 
>>  - In our case, KNI is always enabled for each DPDK port,
>>    as it seemed handy that packet rx/tx stats and up/down status can be
>>    checked by ifconfig as well.
>>    Also, the iface MIB becomes available via /sys/devices/pci.../net.
>> 
>>    As far as we checked, just creating the KNI itself didn't much
>>    affect data-plane performance.
>>    (Only when we update statistics is there a bit of overhead.)
>> 
>>  - I inserted the rte_kni_tx_burst() call into the
>>    packet RX path (after rte_eth_rx_burst) and the TX path (after
>>    rte_eth_tx_burst, on the assumption that a just-sent packet is not yet
>>    freed; it is freed only when its tx descriptor is overwritten by a later
>>    tx packet).
>> 
>>    The call to rte_kni_tx_burst() is enabled/disabled by an external
>>    capture enable/disable command.
>> 
>>    I copy the packet beforehand and pass the copy to rte_kni_tx_burst().
>>    In the TX path we might avoid the copy by just incrementing the packet
>>    refcnt by 1, but I haven't tried that hack much yet.
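
A minimal sketch of the copy-and-mirror step described above;
rte_kni_tx_burst() is the real KNI call, while kni, mirror_pool, and
BURST_SIZE are assumed to be set up elsewhere:

    #include <rte_kni.h>
    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    #define BURST_SIZE 32

    /* Copy up to nb_pkts packets and push the copies into the KNI, where
     * tcpdump can see them; the originals are untouched. */
    static void
    mirror_to_kni(struct rte_kni *kni, struct rte_mempool *mirror_pool,
                  struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_mbuf *copies[BURST_SIZE];
        unsigned int n = 0, sent;
        uint16_t i, len;

        for (i = 0; i < nb_pkts && n < BURST_SIZE; i++) {
            struct rte_mbuf *c = rte_pktmbuf_alloc(mirror_pool);

            if (c == NULL)
                break;                        /* pool empty: stop copying */
            len = rte_pktmbuf_data_len(pkts[i]);
            if (len > rte_pktmbuf_tailroom(c))
                len = rte_pktmbuf_tailroom(c);
            rte_memcpy(rte_pktmbuf_mtod(c, void *),
                       rte_pktmbuf_mtod(pkts[i], const void *), len);
            c->data_len = len;
            c->pkt_len = len;
            copies[n++] = c;
        }
        sent = rte_kni_tx_burst(kni, copies, n);
        while (sent < n)
            rte_pktmbuf_free(copies[sent++]); /* KNI queue full: drop */
    }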
>> 
>>    The packets sent to rte_kni_tx_burst() can then be captured by normal
>>    libpcap tools like tcpdump on the corresponding KNI.
>> 
>>    The performance loss when capture was enabled was roughly 20-30%.
>>    (This may of course vary with many factors.)
>> 
>>    By the way, this approach also allows tx-only or rx-only capture.
>> 
>> 
>>  - Some considerations:
>> 
>>    - Someone might not like a capture on/off check on every pass through
>>      the normal fast-path tx/rx route.
>>      Neither did I, so I created a fast-path send routine and a slow-path
>>      send routine and switched a function pointer when capture was
>>      enabled/disabled (see the sketch after these notes).
>>      But I'm not sure it was worth the effort.
>> 
>>    - With this approach, everyone needs to create their own capture
>>      enable/disable command in their implementation, which could be a bit
>>      bothersome.
>> 
>>      I'm not sure whether it's possible, but if, as with normal tcpdump,
>>      an invocation of tcpdump on a KNI interface could somehow be notified
>>      to the corresponding DPDK port's user application, and the call to
>>      rte_kni_tx_burst() then enabled/disabled automatically, that would be
>>      cool.
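
A minimal sketch of the function-pointer switch from the first note above,
reusing the mirror_to_kni() sketch; the names, the single tx queue, and the
globals are all illustrative:

    #include <rte_ethdev.h>

    typedef uint16_t (*tx_fn)(uint16_t port, struct rte_mbuf **pkts,
                              uint16_t n);

    static struct rte_kni *kni;             /* set up elsewhere */
    static struct rte_mempool *mirror_pool; /* set up elsewhere */

    static uint16_t
    tx_fast(uint16_t port, struct rte_mbuf **pkts, uint16_t n)
    {
        return rte_eth_tx_burst(port, 0, pkts, n);   /* no capture check */
    }

    static uint16_t
    tx_capture(uint16_t port, struct rte_mbuf **pkts, uint16_t n)
    {
        uint16_t sent = rte_eth_tx_burst(port, 0, pkts, n);

        mirror_to_kni(kni, mirror_pool, pkts, sent); /* mirror what went out */
        return sent;
    }

    /* The fast path always calls tx_send(); enabling capture just swaps
     * the pointer (synchronization across lcores is omitted here). */
    static tx_fn tx_send = tx_fast;

    static void
    capture_set(int on)
    {
        tx_send = on ? tx_capture : tx_fast;
    }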
>> 
>>    
>> Thanks,
>> Yoshinobu Inoue
>> 

