[dpdk-users] IP Pipeline QoS

Manoj Mallawaarachchi manoj_ws at yahoo.com
Sat Sep 30 19:33:56 CEST 2017


Hi Cristian & BL,

Thanks for the detailed feedback; I am also exploring similar work. Can you please elaborate more on how to plug the QoS scheduler into the pipeline with QinQ?

I don't clearly understand the following:

encap = ethernet_qinq
qinq_sched = test
ip_hdr_offset = 270

Can you elaborate more on:

1) How do I configure the QoS scheduler for a pipeline in the configuration file, for the scenario described in item #3?
2) I have a single network (192.168.1.x); how will QinQ work in this scenario with the QoS scheduler and the edge_router_downstream pipeline?
3) I'm going to use my QoS app as a forwarding gateway between the local network and the Internet, and back (Internet browsing).

Your advice on how to move forward would be highly appreciated.

Thank you,
Manoj M
--------------------------------------------
On Fri, 9/29/17, Dumitrescu, Cristian <cristian.dumitrescu at intel.com> wrote:

 Subject: Re: [dpdk-users] IP Pipeline QoS
 To: "longtb5 at viettel.com.vn" <longtb5 at viettel.com.vn>, "users at dpdk.org" <users at dpdk.org>
 Date: Friday, September 29, 2017, 9:30 PM
 
 Hi BL,
 
 My answers inline below:
 
 > -----Original Message-----
 > From: longtb5 at viettel.com.vn [mailto:longtb5 at viettel.com.vn]
 > Sent: Saturday, September 23, 2017 9:00 AM
 > To: users at dpdk.org
 > Cc: Dumitrescu, Cristian <cristian.dumitrescu at intel.com>
 > Subject: IP Pipeline QoS
 > 
 > 
 > Hi,
 > I am trying to build a QoS/Traffic management application using the
 > packet framework. The initial goal is to be able to configure traffic
 > flow for up to 1000 users, *individually*, through the front-end
 > cmdline.
 
 Makes sense: you can map each subscriber/user to its own pipe (L3 node in
 the hierarchy), which basically results in 16x queues per subscriber,
 split into 4x traffic classes.
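
 As a minimal sketch of that mapping (illustrative only, not ip_pipeline
 code; the subport ID and pipe profile ID are assumptions), each
 subscriber can simply be given the pipe with the matching index:

 #include <stdint.h>
 #include <rte_sched.h>

 /* Hypothetical mapping: one pipe per subscriber. With the default
  * rte_sched geometry, each pipe owns 4 traffic classes x 4 queues =
  * 16 queues. */
 #define SUBPORT_ID   0
 #define PIPE_PROFILE 0  /* index of a pipe profile defined at port init */

 static int
 map_subscriber_to_pipe(struct rte_sched_port *port, uint32_t subscriber_id)
 {
         /* Pipe i of subport 0 carries all traffic of subscriber i. */
         return rte_sched_pipe_config(port, SUBPORT_ID, subscriber_id,
                 PIPE_PROFILE);
 }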
 
 > Atm I'm looking at ip_pipeline's edge_router_downstream sample and the
 > qos_sched app as starting points.
 
 Yes, these are good starting points.
 
 > I have a few questions:
 > 
 > 1. The traffic management pipeline in edge_router_downstream.cfg is
 > configured as follows:
 > 
 > [PIPELINE2]
 > type = PASS-THROUGH
 > pktq_in = SWQ0 SWQ1 SWQ2 SWQ3 TM0 TM1 TM2 TM3
 > pktq_out = TM0 TM1 TM2 TM3 SWQ4 SWQ5 SWQ6 SWQ7
 > 
 > I'm not exactly sure how this works. My thinking is that since this is
 > a pass-through table with no action, the output of SWQ0 gets connected
 > to the input of TM0, and the output of TM0 gets connected to the input
 > of SWQ4, effectively routing SWQ0 to SWQ4 through TM0. Is that correct?
 
 Yes, you got it right.
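
 To make that 1:1 wiring explicit (a schematic sketch only, not the actual
 ip_pipeline implementation; struct pktq and the pktq_rx()/pktq_tx()
 helpers are hypothetical), a pass-through pipeline forwards input queue i
 straight to output queue i:

 /* For PIPELINE2 above:
  *   pktq_in[0..3] = SWQ0..SWQ3  ->  pktq_out[0..3] = TM0..TM3
  *   pktq_in[4..7] = TM0..TM3    ->  pktq_out[4..7] = SWQ4..SWQ7 */
 static void
 pass_through_run(struct pktq *pktq_in, struct pktq *pktq_out, int n_pktq)
 {
         struct rte_mbuf *burst[32];
         int i, n;

         for (i = 0; i < n_pktq; i++) {
                 n = pktq_rx(&pktq_in[i], burst, 32);  /* hypothetical */
                 if (n > 0)
                         pktq_tx(&pktq_out[i], burst, n); /* hypothetical */
         }
 }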
 
 > 
 > 2. If that's the case, why don't we do it this way:
 > 
 > [PIPELINE1]
 > type = ROUTING
 > pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
 > pktq_out = TM0 TM1 TM2 TM3 SINK0
 > 
 > [PIPELINE2]
 > type = PASS-THROUGH
 > pktq_in = TM0 TM1 TM2 TM3
 > pktq_out = TM0 TM1 TM2 TM3
 > 
 > [PIPELINE3]
 > type = PASS-THROUGH
 > pktq_in = TM0 TM1 TM2 TM3
 > pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
 > 
 > In other words, why do we need SWQs in this case? (And in general, what
 > is the typical use of SWQs?)
 > 
 
 Great question!
 
 First, I think what you are trying to suggest looks more like the
 configuration below, as we need to have a single producer and consumer
 for each TM, right?
 
 [PIPELINE1]
 type = ROUTING
 pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
 pktq_out = TM0 TM1 TM2 TM3 SINK0
 
 [PIPELINE2]
 type = PASS-THROUGH
 pktq_in = TM0 TM1 TM2 TM3
 pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
 
 Second, this approach only works when both of these pipelines are on the
 same (logical) CPU core, as the TM port's underlying rte_sched object has
 the restriction that enqueue() and dequeue() for the same port must be
 executed by the same thread. So eliminating the SWQs is actually
 dangerous, as you might later decide to push the two pipelines to
 different CPU cores (which can be quickly done through the ip_pipeline
 config file). Keeping the SWQs allows treating the TMs as objects
 internal to their pipeline, hence better encapsulation.
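
 This same-thread requirement shows up directly in the canonical rte_sched
 run loop (a minimal sketch; the burst size and the swq_rx()/swq_tx()
 helpers are illustrative assumptions, e.g. thin wrappers over rte_ring):

 #include <rte_mbuf.h>
 #include <rte_sched.h>

 #define BURST 32

 /* Hypothetical SWQ helpers. */
 extern int swq_rx(struct rte_mbuf **pkts, int n);
 extern void swq_tx(struct rte_mbuf **pkts, int n);

 /* Run loop for one TM, executed on a single lcore: rte_sched is not
  * thread-safe, so rte_sched_port_enqueue() and rte_sched_port_dequeue()
  * for the same port must both be issued from this one thread. */
 static void
 tm_run(struct rte_sched_port *port)
 {
         struct rte_mbuf *in[BURST], *out[BURST];
         int n_in, n_out;

         for ( ; ; ) {
                 n_in = swq_rx(in, BURST);
                 if (n_in > 0)
                         rte_sched_port_enqueue(port, in, n_in);

                 n_out = rte_sched_port_dequeue(port, out, BURST);
                 if (n_out > 0)
                         swq_tx(out, n_out);
         }
 }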
 
 Third, what is the benefit of saving some SWQs? If the pipelines are on
 different CPU cores, then the SWQs are a must due to thread safety. If
 the pipelines are on the same CPU core, then the SWQ producer and
 consumer are the same thread, so the SWQ enqueue/dequeue overhead is very
 small (L1 cache read/write), and eliminating them does not provide any
 real performance benefit.
 
 Makes sense?
 
 > 3. I understand the fast/slow table copy mechanism for
 > querying/updating _tables_ through the front end. How should I go about
 > querying/updating pipe profiles, which are part of TM _ports_ if I'm
 > not mistaken? For example, to get/set the rate of tc 0 of pipe
 > profile 0.
 > Put another way, how can I configure tm_profile.cfg interactively
 > through the CLI? Is it even possible to configure TMs on-the-fly like
 > that?
 > 
 
 Yes, it is possible to do on-the-fly updates to the TM configuration.
 This is done by re-invoking the rte_sched_subport_config() /
 rte_sched_pipe_config() functions after TM init has been completed.
 
 Unfortunately we don't have the CLI commands for this yet in the
 ip_pipeline application, so you would have to write them yourself
 (straightforward).
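
 For example (a sketch, assuming the port was created with the needed pipe
 profiles at init time; the wrapper names are illustrative, not existing
 ip_pipeline CLI code), such CLI commands could boil down to:

 #include <stdint.h>
 #include <rte_sched.h>

 /* Hypothetical CLI back-end: re-point a running pipe to a different
  * pipe profile that was defined at port init time. */
 static int
 cli_set_pipe_profile(struct rte_sched_port *port, uint32_t subport_id,
         uint32_t pipe_id, int32_t new_profile_id)
 {
         return rte_sched_pipe_config(port, subport_id, pipe_id,
                 new_profile_id);
 }

 /* Hypothetical CLI back-end: rewrite subport parameters in place, e.g.
  * params->tc_rate[0] for traffic class 0. */
 static int
 cli_set_subport_rates(struct rte_sched_port *port, uint32_t subport_id,
         struct rte_sched_subport_params *params)
 {
         return rte_sched_subport_config(port, subport_id, params);
 }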
 
 > Thanks,
 > BL
 
 Regards,
 Cristian
 

