[dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement

Xu, Qian Q qian.q.xu at intel.com
Mon Nov 7 15:22:24 CET 2016


See below. 

-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com] 
Sent: Monday, November 7, 2016 8:51 PM
To: Xu, Qian Q <qian.q.xu at intel.com>
Cc: Liu, Yong <yong.liu at intel.com>; ci at dpdk.org
Subject: Re: [dpdk-moving] proposal for DPDK CI improvement

I'm removing the moving at dpdk.org list from CC as we are discussing implementation details.

2016-11-07 12:20, Xu, Qian Q:
> > a). Currently, there is only " S/W/F for Success/Warning/Fail counters"
> > in tests, so does it refer to build test or functional test or 
> > performance test?
> 
> It can be any test, including performance ones.
> A major performance regression must be seen as a failed test. 
> 
> [Qian] If it can refer to any test, how do we know which tests have 
> been run? For example, some patches may only be built, some may get 
> performance tests and some functional tests.
> How do we differentiate these test executions?

Why do you want to differentiate them in patchwork?
There are 3 views of test reports:
1/ On a patchwork page listing patches, we see the counters S/W/F so we can give attention to patches having some warnings or failures.
2/ On a patchwork page showing only one patch, we see the list of tests, the results and the links to the reports.
3/ In a detailed report from the test-report ml archives, we can see exactly what's going wrong or what was tested successfully (and what are the numbers in case of a performance test).
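The three views above roll per-test results up into a single S/W/F counter per patch. A minimal sketch (hypothetical function and context names, not patchwork code) of how individual check results with distinct contexts collapse into those counters, which is exactly where the per-category information is lost:

```python
from collections import Counter

def aggregate_checks(checks):
    """Roll per-context check results up into (S, W, F) counters.

    `checks` is a list of (context, state) tuples, where state is one
    of "success", "warning", "fail" -- mirroring patchwork's check
    states. The context ("intel/build", "intel/functional", ...) is
    dropped by the aggregation.
    """
    counts = Counter(state for _, state in checks)
    return (counts["success"], counts["warning"], counts["fail"])

# Example: build passed, one functional test failed, perf fluctuated.
checks = [
    ("intel/build", "success"),
    ("intel/functional", "fail"),
    ("intel/performance", "warning"),
]
print(aggregate_checks(checks))  # (1, 1, 1)
```

Once aggregated, a build failure with no functional run and a build success with one functional failure both surface as the same F count, which is the ambiguity discussed below.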

---Differentiating the tests (build, functional and performance) would give a clear view of which tests have been run and which failed, without
having to click through a link to find out. Right now all tests share one column for pass, fail or warning. For example, when we see that a patch failed, we don't know which
test failed, or even which tests were executed: if the build failed and no functional tests ran, it shows as Failed; if the build passed but one functional test failed,
it also shows as Failed. So from "Failed" alone we can't tell at which level the failure occurred.
For one patch, I think we may have different requirements for build, functional and performance test failures.
Passing the build is the first priority: no errors are allowed there. For functional tests, some failures may be acceptable if the pass rate is over 90%. As to
performance, the criteria for failure are not clear yet, since performance fluctuates and some features bring a performance drop as the cost of the feature
implementation. If we can't differentiate the tests, we can't easily judge whether a failure is really critical, and we don't even know which tests were executed.
Similarly, a patch can carry Acked-by, Tested-by and Reviewed-by tags, which distinguish the different aspects of the patch review status. If we collapsed them into a single
pass or fail, how would we know whether it was the review, the testing or the ack that failed? We wouldn't even know which activity had been done. Even with a link to click
through for the details of Acked-by, Tested-by and Reviewed-by, it's not so convenient. Differentiating the tests follows the same reasoning.

Btw, do you want each patch to get functional and performance tests? As we discussed before, we currently have 4 repos, so we may not be sure which repo a patch should go to.
That's fine for the build, but for functional tests, a patch applied to the wrong repo may fail the tests, and we need to think about how to solve that issue.
In the short term, I suggest we keep only the build test per patch, and run functional and performance tests on a daily basis against the git trees.



