[dpdk-moving] proposal for DPDK CI improvement

O'Driscoll, Tim tim.odriscoll at intel.com
Mon Nov 7 11:34:55 CET 2016


> -----Original Message-----
> From: moving [mailto:moving-bounces at dpdk.org] On Behalf Of Jerome Tollet
> (jtollet)
> Sent: Monday, November 7, 2016 10:27 AM
> To: Thomas Monjalon <thomas.monjalon at 6wind.com>; Xu, Qian Q
> <qian.q.xu at intel.com>
> Cc: moving at dpdk.org; Liu, Yong <yong.liu at intel.com>; ci at dpdk.org
> Subject: Re: [dpdk-moving] proposal for DPDK CI improvement
> 
> Hi Thomas & Qian,
> IMHO, performance results should be centralized and produced in a
> trusted & controlled environment.
> If official DPDK numbers come from vendors' private labs, the
> perception might be that they are not 100% neutral. That would probably
> not help the DPDK community to be seen as open & transparent.

+1

Somebody (Jan Blunck, I think) also said on last week's call that, for a centralized lab, performance testing is a higher priority than CI. A model with centralized performance testing and distributed CI might work well.

> 
> Jerome
> 
> On 07/11/2016 11:17, "moving on behalf of Thomas Monjalon" <moving-
> bounces at dpdk.org on behalf of thomas.monjalon at 6wind.com> wrote:
> 
>     Hi Qian,
> 
>     2016-11-07 07:55, Xu, Qian Q:
>     > I think the discussion about CI is a good start. I agree with the
>     > general ideas:
>     > 1. It's good to have more contributors for CI; it's a community
>     > effort.
>     > 2. Building a distributed CI system is good and necessary.
>     > 3. "When and Where" are the basic and important questions.
>     >
>     > Let me add my 2 cents here.
>     > 1. Distributed tests vs. a centralized lab
>     > We can run the build and functional tests in our distributed labs.
>     > As for performance: as we all know, performance is key to DPDK,
>     > so I suggest a centralized lab for the performance testing. Some
>     > comments below:
>     > a) Do we want to publish performance reports for different
>     > platforms with different HW/NICs? Is anyone against publishing
>     > performance numbers?
>     > b) If the answer to the first question is "Yes", how do we ensure
>     > that others trust the numbers, and how can they reproduce them if
>     > they don't have the platforms/HW?
>     > As Marvin said, transparency and independence are the advantages
>     > of an open centralized lab. Besides, we can demonstrate DPDK
>     > performance to any audience with the lab. Of course, we need
>     > control of the system and cannot allow random access; access
>     > control is another topic. The lab could even be used as a training
>     > or demo lab when we have community training or performance demo
>     > days (to name just two possible events).
>     >
>     > 2. Besides "When and Where", there are also "What" and "How".
>     > When:
>     > 	- regularly on a git tree -- what tests need to be done here? I
>     > propose a daily build, daily functional regression, and daily
>     > performance regression.
>     > 	- after each patch submission -> report available via patchwork
>     > -- what tests need to be done? The build test as the first one
>     > (see the sketch below); maybe we can add functional or
>     > performance tests in the future.
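>     >
>     > As an illustration only (the function name and build target are
>     > assumptions, not a settled design), the per-patch build test
>     > could start as small as:
>     >
>     >     import subprocess
>     >
>     >     def build_test(workdir):
>     >         """Build DPDK and map the exit code to a test status."""
>     >         # x86_64-native-linuxapp-gcc is one example target; each
>     >         # lab would substitute its own configuration.
>     >         ret = subprocess.call(["make", "-C", workdir, "install",
>     >                                "T=x86_64-native-linuxapp-gcc"])
>     >         return "SUCCESS" if ret == 0 else "FAILURE"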
>     >
>     > How do we collect and display the results?
>     > Thanks, Thomas, for the hard work on the patchwork upgrade. It's
>     > good to see the CheckPatch display here.
>     > IMHO, building the complete distributed system needs a very big
>     > effort. Thomas, any effort estimation and schedule for it?
> 
>     It must be a collective effort.
>     I plan to publish a new git repository really soon to help build a
>     test lab.
>     The first version will allow sending correctly formatted test
>     reports.
>     The next step will be to help apply patches (on the right branch,
>     with series support).
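>
>     To give an idea (the patchwork mbox URL and all names below are
>     assumptions for illustration, not the actual repository content),
>     applying a series on top of the right branch could look like:
>
>         import subprocess
>         import urllib.request
>
>         MBOX_URL = "http://dpdk.org/dev/patchwork/patch/%d/mbox/"
>
>         def apply_series(repo, patch_ids, branch="master"):
>             """Fetch each patch of a series and apply it with git am."""
>             subprocess.check_call(["git", "-C", repo, "checkout", branch])
>             for pid in patch_ids:
>                 mbox = urllib.request.urlopen(MBOX_URL % pid).read()
>                 git_am = subprocess.Popen(["git", "-C", repo, "am"],
>                                           stdin=subprocess.PIPE)
>                 git_am.communicate(mbox)
>                 if git_am.returncode != 0:
>                     subprocess.call(["git", "-C", repo, "am", "--abort"])
>                     return False
>             return True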
> 
>     > a) Currently, there is only "S/W/F for Success/Warning/Fail
>     > counters" in tests; does it refer to the build test, the
>     > functional test, or the performance test?
> 
>     It can be any test, including performance ones. A major performance
>     regression must be seen as a failed test.
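>
>     As a sketch of what "major regression => failed test" could mean in
>     practice (the thresholds here are invented for illustration):
>
>         def perf_status(measured_mpps, baseline_mpps,
>                         warn=0.02, fail=0.05):
>             """Map a throughput measurement to S/W/F: a small drop is
>             a warning, a major regression is a failure."""
>             drop = (baseline_mpps - measured_mpps) / baseline_mpps
>             if drop >= fail:
>                 return "FAILURE"
>             if drop >= warn:
>                 return "WARNING"
>             return "SUCCESS"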
> 
>     > If it only refers to build tests, then you may need to change the
>     > title to Build S/W/F. Then how many architectures or platforms do
>     > we build for? For example, we support Intel IA builds, ARM builds,
>     > and IBM Power builds. Then we need to collect the build results
>     > from Intel/IBM/ARM etc. to show the total S/W/F. For example, if
>     > the build passes on IA but fails on IBM, then we need to record it
>     > as 1S/0W/1F (sketched below). I don't know whether we need to
>     > collect the warning information here.
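>     >
>     > For illustration (using the example platforms above; the helper
>     > is hypothetical), aggregating per-platform results into those
>     > counters could be:
>     >
>     >     from collections import Counter
>     >
>     >     def swf_counters(results):
>     >         """results: dict of platform -> status, e.g.
>     >         {"IA": "SUCCESS", "IBM": "FAILURE"} -> "1S/0W/1F"."""
>     >         c = Counter(results.values())
>     >         return "%dS/%dW/%dF" % (c["SUCCESS"], c["WARNING"],
>     >                                 c["FAILURE"])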
> 
>     The difference between warnings and failures is a matter of
>     severity.
>     The checkpatch errors are reported as warnings.
> 
>     > b) How about displaying performance results on the website?
>     > Whether the lab is distributed or centralized, we need a place to
>     > show the performance numbers and trends to ensure there is no
>     > performance regression. Do you have any plan to implement it?
> 
>     No, I have no plan, but I expect it to be solved by those working
>     on performance tests, maybe you? :)
>     If a private lab can publish some web graphs of performance
>     evolution, it is great.
>     If we can do it in a centralized lab, it is also great.
>     If we can have a web interface to gather all the performance
>     numbers and graphs, it is really, really great!
> 
>     > 3. A proposal: have a CI mailing list, and regular meetings for
>     > people working on CI that discuss only CI? Maybe we can have more
>     > frequent meetings at first to get aligned, then reduce the
>     > frequency once the solution settles down. The current call covers
>     > many other topics. What do you think?
> 
>     The mailing list is now created: ci at dpdk.org.
>     About meetings, I feel we can start working through ci at dpdk.org
>     and see how efficient it is. Though if you need a meeting, feel
>     free to propose one.
> 


