From patchwork Wed Sep 13 06:04:03 2017
X-Patchwork-Submitter: "Zhao1, Wei"
X-Patchwork-Id: 28643
From: Wei Zhao
To: dev@dpdk.org
Cc: Wei Zhao
Date: Wed, 13 Sep 2017 14:04:03 +0800
Message-Id: <1505282644-40415-2-git-send-email-wei.zhao1@intel.com>
In-Reply-To: <1505282644-40415-1-git-send-email-wei.zhao1@intel.com>
References: <20170824032602.107045-1-wei.zhao1@intel.com> <1505282644-40415-1-git-send-email-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/2] net/i40e: queue region set and flush

This feature enables queue region configuration for RSS in the PF/VF, so that different traffic classes or different packet classification types can be separated into different queues in different queue regions. This patch sets the queue region range, i.e. the number of queues in a region and the index of the first queue.
This patch enables mapping between different user priorities (UP) and different traffic classes. It also enables mapping between a region index and a specific flowtype (PCTYPE), and provides a way to flush all of the queue region configuration described above.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c            |  19 +-
 drivers/net/i40e/i40e_ethdev.h            |  30 ++
 drivers/net/i40e/rte_pmd_i40e.c           | 484 ++++++++++++++++++++++++++++++
 drivers/net/i40e/rte_pmd_i40e.h           |  38 +++
 drivers/net/i40e/rte_pmd_i40e_version.map |   1 +
 5 files changed, 568 insertions(+), 4 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 8e0580c..1663fc0 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -137,10 +137,6 @@
 #define I40E_PRTTSYN_TSYNTYPE 0x0e000000
 #define I40E_CYCLECOUNTER_MASK 0xffffffffffffffffULL

-#define I40E_MAX_PERCENT 100
-#define I40E_DEFAULT_DCB_APP_NUM 1
-#define I40E_DEFAULT_DCB_APP_PRIO 3
-
 /**
  * Below are values for writing un-exposed registers suggested
  * by silicon experts
@@ -1034,6 +1030,15 @@ i40e_init_fdir_filter_list(struct rte_eth_dev *dev)
 	return ret;
 }

+static void
+i40e_init_queue_region_conf(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_queue_region_info *info = &pf->queue_region;
+
+	memset(info, 0, sizeof(struct i40e_queue_region_info));
+}
+
 static int
 eth_i40e_dev_init(struct rte_eth_dev *dev)
 {
@@ -1309,6 +1314,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev)
 	if (ret < 0)
 		goto err_init_fdir_filter_list;

+	/* initialize queue region configuration */
+	i40e_init_queue_region_conf(dev);
+
 	return 0;

 err_init_fdir_filter_list:
@@ -1464,6 +1472,9 @@ eth_i40e_dev_uninit(struct rte_eth_dev *dev)
 	/* Remove all Traffic Manager configuration */
 	i40e_tm_conf_uninit(dev);

+	/* Remove all the queue region configuration */
+	i40e_flush_region_all_conf(hw, pf, 0);
+
 	return 0;
 }

diff --git a/drivers/net/i40e/i40e_ethdev.h
b/drivers/net/i40e/i40e_ethdev.h index ad80f0f..d612886 100644 --- a/drivers/net/i40e/i40e_ethdev.h +++ b/drivers/net/i40e/i40e_ethdev.h @@ -260,6 +260,16 @@ enum i40e_flxpld_layer_idx { #define I40E_QOS_BW_WEIGHT_MIN 1 /* The max bandwidth weight is 127. */ #define I40E_QOS_BW_WEIGHT_MAX 127 +/* The max queue region index is 7. */ +#define I40E_REGION_MAX_INDEX 7 +/* The max queue region userpriority is 7. */ +#define I40E_REGION_USERPRIORITY_MAX_INDEX 7 +/* The max pctype index is 63. */ +#define I40E_REGION_PCTYPE_MAX_INDEX 63 + +#define I40E_MAX_PERCENT 100 +#define I40E_DEFAULT_DCB_APP_NUM 1 +#define I40E_DEFAULT_DCB_APP_PRIO 3 /** * The overhead from MTU to max frame size. @@ -541,6 +551,22 @@ struct i40e_ethertype_rule { struct rte_hash *hash_table; }; +/* queue region info */ +struct i40e_region_info { + uint8_t region_id; + uint8_t queue_start_index; + uint8_t queue_num; + uint8_t user_priority_num; + uint8_t user_priority[I40E_MAX_USER_PRIORITY]; + uint8_t flowtype_num; + uint8_t hw_flowtype[I40E_FILTER_PCTYPE_MAX]; +}; + +struct i40e_queue_region_info { + uint16_t queue_region_number; + struct i40e_region_info region[I40E_REGION_MAX_INDEX + 1]; +}; + /* Tunnel filter number HW supports */ #define I40E_MAX_TUNNEL_FILTER_NUM 400 @@ -776,6 +802,7 @@ struct i40e_pf { struct i40e_fdir_info fdir; /* flow director info */ struct i40e_ethertype_rule ethertype; /* Ethertype filter rule */ struct i40e_tunnel_rule tunnel; /* Tunnel filter rule */ + struct i40e_queue_region_info queue_region; /* queue region info */ struct i40e_fc_conf fc_conf; /* Flow control conf */ struct i40e_mirror_rule_list mirror_list; uint16_t nb_mirror_rule; /* The number of mirror rules */ @@ -1003,6 +1030,9 @@ void i40e_check_write_reg(struct i40e_hw *hw, uint32_t addr, uint32_t val); int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops); void i40e_tm_conf_init(struct rte_eth_dev *dev); void i40e_tm_conf_uninit(struct rte_eth_dev *dev); +int i40e_flush_region_all_conf(struct i40e_hw 
*hw, + struct i40e_pf *pf, uint16_t on); + #define I40E_DEV_TO_PCI(eth_dev) \ RTE_DEV_TO_PCI((eth_dev)->device) diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c index d69a472..1e97200 100644 --- a/drivers/net/i40e/rte_pmd_i40e.c +++ b/drivers/net/i40e/rte_pmd_i40e.c @@ -35,6 +35,7 @@ #include #include "base/i40e_prototype.h" +#include "base/i40e_dcb.h" #include "i40e_ethdev.h" #include "i40e_pf.h" #include "i40e_rxtx.h" @@ -2161,3 +2162,486 @@ rte_pmd_i40e_add_vf_mac_addr(uint8_t port, uint16_t vf_id, return 0; } + +static int +i40e_vsi_update_queue_region_mapping(struct i40e_hw *hw, + struct i40e_pf *pf) +{ + uint16_t i; + struct i40e_vsi *vsi = pf->main_vsi; + uint16_t queue_offset, bsf, tc_index; + struct i40e_vsi_context ctxt; + struct i40e_aqc_vsi_properties_data *vsi_info; + struct i40e_queue_region_info *region_info = + &pf->queue_region; + uint32_t ret = -EINVAL; + + if (!region_info->queue_region_number) { + PMD_INIT_LOG(ERR, "there is no that region id been set before"); + return ret; + } + + memset(&ctxt, 0, sizeof(struct i40e_vsi_context)); + + /* Update Queue Pairs Mapping for currently enabled UPs */ + ctxt.seid = vsi->seid; + ctxt.pf_num = hw->pf_id; + ctxt.vf_num = 0; + ctxt.uplink_seid = vsi->uplink_seid; + ctxt.info = vsi->info; + vsi_info = &ctxt.info; + + memset(vsi_info->tc_mapping, 0, sizeof(uint16_t) * 8); + memset(vsi_info->queue_mapping, 0, sizeof(uint16_t) * 16); + + /** + * Configure queue region and queue mapping parameters, + * for enabled queue region, allocate queues to this region. 
+ */ + + for (i = 0; i < region_info->queue_region_number; i++) { + tc_index = region_info->region[i].region_id; + bsf = rte_bsf32(region_info->region[i].queue_num); + queue_offset = region_info->region[i].queue_start_index; + vsi_info->tc_mapping[tc_index] = rte_cpu_to_le_16( + (queue_offset << I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) | + (bsf << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT)); + } + + /* Associate queue number with VSI, Keep vsi->nb_qps unchanged */ + if (vsi->type == I40E_VSI_SRIOV) { + vsi_info->mapping_flags |= + rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_NONCONTIG); + for (i = 0; i < vsi->nb_qps; i++) + vsi_info->queue_mapping[i] = + rte_cpu_to_le_16(vsi->base_queue + i); + } else { + vsi_info->mapping_flags |= + rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_CONTIG); + vsi_info->queue_mapping[0] = rte_cpu_to_le_16(vsi->base_queue); + } + vsi_info->valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_QUEUE_MAP_VALID); + + /* Update the VSI after updating the VSI queue-mapping information */ + ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to configure queue region " + "mapping = %d ", hw->aq.asq_last_status); + return ret; + } + /* update the local VSI info with updated queue map */ + (void)rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping, + sizeof(vsi->info.tc_mapping)); + (void)rte_memcpy(&vsi->info.queue_mapping, + &ctxt.info.queue_mapping, + sizeof(vsi->info.queue_mapping)); + vsi->info.mapping_flags = ctxt.info.mapping_flags; + vsi->info.valid_sections = 0; + + return 0; +} + + +static int +i40e_set_queue_region(struct i40e_pf *pf, + struct rte_i40e_rss_region_conf *conf_ptr) +{ + uint16_t tc_size_tb[7] = {1, 2, 4, 8, 16, 32, 64}; + uint16_t i; + struct i40e_vsi *main_vsi = pf->main_vsi; + struct i40e_queue_region_info *info = &pf->queue_region; + uint32_t ret = -EINVAL; + + for (i = 0; i < I40E_REGION_MAX_INDEX; i++) + if (conf_ptr->queue_num == tc_size_tb[i]) + break; + + if (i == I40E_REGION_MAX_INDEX) { + printf("The 
region sizes should be any of the following " + "values: 1, 2, 4, 8, 16, 32, 64 as long as the " + "total number of queues do not exceed the VSI allocation"); + return ret; + } + + if (conf_ptr->region_id >= I40E_REGION_MAX_INDEX) { + PMD_INIT_LOG(ERR, "the queue region max index is 7"); + return ret; + } + + if ((conf_ptr->queue_start_index + conf_ptr->queue_num) + > main_vsi->nb_used_qps) { + PMD_INIT_LOG(ERR, "the queue index exceeds the VSI range"); + return ret; + } + + if (info->queue_region_number) { + for (i = 0; i < info->queue_region_number; i++) + if (conf_ptr->region_id == info->region[i].region_id) + break; + + if ((i == info->queue_region_number) && + (i <= I40E_REGION_MAX_INDEX)) { + info->region[i].region_id = conf_ptr->region_id; + info->region[i].queue_num = conf_ptr->queue_num; + info->region[i].queue_start_index = + conf_ptr->queue_start_index; + info->queue_region_number++; + } else if (i > I40E_REGION_MAX_INDEX) { + PMD_INIT_LOG(ERR, "queue region number " + "exceeds maxnum 8"); + return ret; + } else { + PMD_INIT_LOG(ERR, "queue region id has " + "been set before"); + return ret; + } + } else { + info->region[0].region_id = conf_ptr->region_id; + info->region[0].queue_num = conf_ptr->queue_num; + info->region[0].queue_start_index = + conf_ptr->queue_start_index; + info->queue_region_number++; + } + + return 0; +} + +static int +i40e_set_region_flowtype_pf(struct i40e_hw *hw, + struct i40e_pf *pf, struct rte_i40e_rss_region_conf *conf_ptr) +{ + uint32_t pfqf_hregion; + uint32_t ret = -EINVAL; + struct i40e_queue_region_info *info = &pf->queue_region; + uint16_t i, j, index, flowtype_set = 0; + uint16_t region_index, flowtype_index; + + if (conf_ptr->region_id > I40E_PFQF_HREGION_MAX_INDEX) { + PMD_INIT_LOG(ERR, "the queue region max index is 7"); + return ret; + } + + if (conf_ptr->hw_flowtype >= I40E_FILTER_PCTYPE_MAX) { + PMD_INIT_LOG(ERR, "the hw_flowtype or PCTYPE max index is 63"); + return ret; + } + + if (info->queue_region_number) { + 
for (i = 0; i < info->queue_region_number; i++) + if (conf_ptr->region_id == info->region[i].region_id) + break; + + if (i == info->queue_region_number) { + PMD_INIT_LOG(ERR, "that region id has not " + "been set before"); + return ret; + } + region_index = i; + + for (i = 0; i < info->queue_region_number; i++) { + for (j = 0; j < info->region[i].flowtype_num; j++) { + if (conf_ptr->hw_flowtype == + info->region[i].hw_flowtype[j]) { + flowtype_set = 1; + break; + } + } + } + + if (flowtype_set) { + PMD_INIT_LOG(ERR, "that hw_flowtype " + "has been set before"); + return ret; + } + flowtype_index = info->region[region_index].flowtype_num; + info->region[region_index].hw_flowtype[flowtype_index] = + conf_ptr->hw_flowtype; + info->region[region_index].flowtype_num++; + } else { + PMD_INIT_LOG(ERR, "there is no that region id been set before"); + return ret; + } + + index = conf_ptr->hw_flowtype >> 3; + pfqf_hregion = i40e_read_rx_ctl(hw, I40E_PFQF_HREGION(index)); + + if ((conf_ptr->hw_flowtype & 0x7) == 0) { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_0_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_0_SHIFT; + } else if ((conf_ptr->hw_flowtype & 0x7) == 1) { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_1_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_1_SHIFT; + } else if ((conf_ptr->hw_flowtype & 0x7) == 2) { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_2_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_2_SHIFT; + } else if ((conf_ptr->hw_flowtype & 0x7) == 3) { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_3_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_3_SHIFT; + } else if ((conf_ptr->hw_flowtype & 0x7) == 4) { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_4_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_4_SHIFT; + } else if ((conf_ptr->hw_flowtype & 0x7) == 5) { + pfqf_hregion |= 
conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_5_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_5_SHIFT; + } else if ((conf_ptr->hw_flowtype & 0x7) == 6) { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_6_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_6_SHIFT; + } else { + pfqf_hregion |= conf_ptr->region_id << + I40E_PFQF_HREGION_REGION_7_SHIFT; + pfqf_hregion |= 1 << I40E_PFQF_HREGION_OVERRIDE_ENA_7_SHIFT; + } + + i40e_write_rx_ctl(hw, I40E_PFQF_HREGION(index), pfqf_hregion); + + return 0; +} + +static int +i40e_set_up_region(struct i40e_pf *pf, + struct rte_i40e_rss_region_conf *conf_ptr) +{ + struct i40e_queue_region_info *info = &pf->queue_region; + uint32_t ret = -EINVAL; + uint16_t i, j, region_index, up_set = 0; + + if (conf_ptr->user_priority > I40E_REGION_USERPRIORITY_MAX_INDEX) { + PMD_INIT_LOG(ERR, "the queue region max index is 7"); + return ret; + } + + if (conf_ptr->region_id >= I40E_REGION_MAX_INDEX) { + PMD_INIT_LOG(ERR, "the region_id max index is 7"); + return ret; + } + + if (info->queue_region_number) { + for (i = 0; i < info->queue_region_number; i++) + if (conf_ptr->region_id == info->region[i].region_id) + break; + + if (i == info->queue_region_number) { + PMD_INIT_LOG(ERR, "that region id " + "has not been set before"); + return ret; + } + region_index = i; + + for (i = 0; i < info->queue_region_number; i++) { + for (j = 0; j < + info->region[i].user_priority_num; j++) { + if (info->region[i].user_priority[j] == + conf_ptr->user_priority) { + up_set = 1; + break; + } + } + } + + if (up_set) { + PMD_INIT_LOG(ERR, "that user priority " + "has been set before"); + return ret; + } + + j = info->region[region_index].user_priority_num; + info->region[region_index].user_priority[j] = + conf_ptr->user_priority; + info->region[region_index].user_priority_num++; + } else { + PMD_INIT_LOG(ERR, "there is no that region id been set before"); + return ret; + } + + return 0; +} + +static int 
+i40e_queue_region_dcb_configure(struct i40e_hw *hw, + struct i40e_pf *pf) +{ + struct i40e_dcbx_config dcb_cfg_local; + struct i40e_dcbx_config *dcb_cfg; + struct i40e_queue_region_info *info = &pf->queue_region; + struct i40e_dcbx_config *old_cfg = &hw->local_dcbx_config; + uint32_t ret = -EINVAL; + uint16_t i, j, prio_index, region_index; + uint8_t tc_map, tc_bw, bw_lf; + + if (!info->queue_region_number) { + PMD_INIT_LOG(ERR, "there is no that region id been set before"); + return ret; + } + + dcb_cfg = &dcb_cfg_local; + memset(dcb_cfg, 0, sizeof(struct i40e_dcbx_config)); + + /* assume each tc has the same bw */ + tc_bw = I40E_MAX_PERCENT / info->queue_region_number; + for (i = 0; i < info->queue_region_number; i++) + dcb_cfg->etscfg.tcbwtable[i] = tc_bw; + /* to ensure the sum of tcbw is equal to 100 */ + bw_lf = I40E_MAX_PERCENT % info->queue_region_number; + for (i = 0; i < bw_lf; i++) + dcb_cfg->etscfg.tcbwtable[i]++; + + /* assume each tc has the same Transmission Selection Algorithm */ + for (i = 0; i < info->queue_region_number; i++) + dcb_cfg->etscfg.tsatable[i] = I40E_IEEE_TSA_ETS; + + for (i = 0; i < info->queue_region_number; i++) { + for (j = 0; j < info->region[i].user_priority_num; j++) { + prio_index = info->region[i].user_priority[j]; + region_index = info->region[i].region_id; + dcb_cfg->etscfg.prioritytable[prio_index] = + region_index; + } + } + + /* FW needs one App to configure HW */ + dcb_cfg->numapps = I40E_DEFAULT_DCB_APP_NUM; + dcb_cfg->app[0].selector = I40E_APP_SEL_ETHTYPE; + dcb_cfg->app[0].priority = I40E_DEFAULT_DCB_APP_PRIO; + dcb_cfg->app[0].protocolid = I40E_APP_PROTOID_FCOE; + + tc_map = RTE_LEN2MASK(info->queue_region_number, uint8_t); + + dcb_cfg->pfc.willing = 0; + dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; + dcb_cfg->pfc.pfcenable = tc_map; + + /* Copy the new config to the current config */ + *old_cfg = *dcb_cfg; + old_cfg->etsrec = old_cfg->etscfg; + ret = i40e_set_dcb_config(hw); + + if (ret) { + PMD_INIT_LOG(ERR, 
"Set queue region DCB Config failed, err" + " %s aq_err %s", + i40e_stat_str(hw, ret), + i40e_aq_str(hw, hw->aq.asq_last_status)); + return ret; + } + + return 0; +} + +static int +i40e_queue_region_dcb_configure_default(struct i40e_hw *hw) +{ + uint16_t i; + uint32_t ret = -EINVAL; + + memset(&hw->local_dcbx_config, 0, + sizeof(struct i40e_dcbx_config)); + /* set dcb default configuration */ + hw->local_dcbx_config.etscfg.willing = 0; + hw->local_dcbx_config.etscfg.maxtcs = 0; + hw->local_dcbx_config.etscfg.tcbwtable[0] = 100; + hw->local_dcbx_config.etscfg.tsatable[0] = + I40E_IEEE_TSA_ETS; + /* all UPs mapping to region 0 */ + for (i = 0; i < I40E_MAX_USER_PRIORITY; i++) + hw->local_dcbx_config.etscfg.prioritytable[i] = 0; + hw->local_dcbx_config.etsrec = + hw->local_dcbx_config.etscfg; + hw->local_dcbx_config.pfc.willing = 0; + hw->local_dcbx_config.pfc.pfccap = + I40E_MAX_TRAFFIC_CLASS; + /* FW needs one App to configure HW */ + hw->local_dcbx_config.numapps = 1; + hw->local_dcbx_config.app[0].selector = + I40E_APP_SEL_ETHTYPE; + hw->local_dcbx_config.app[0].priority = 3; + hw->local_dcbx_config.app[0].protocolid = + I40E_APP_PROTOID_FCOE; + ret = i40e_set_dcb_config(hw); + if (ret) { + PMD_INIT_LOG(ERR, + "default dcb config fails. 
err = %d, aq_err = %d.", + ret, hw->aq.asq_last_status); + return ret; + } + + return 0; +} + +int +i40e_flush_region_all_conf(struct i40e_hw *hw, struct i40e_pf *pf, + uint16_t on) +{ + uint16_t i; + struct i40e_queue_region_info *info = &pf->queue_region; + + if (on) { + i40e_vsi_update_queue_region_mapping(hw, pf); + i40e_queue_region_dcb_configure(hw, pf); + return 0; + } + + info->queue_region_number = 1; + info->region[0].queue_num = 64; + info->region[0].queue_start_index = 0; + + i40e_vsi_update_queue_region_mapping(hw, pf); + i40e_queue_region_dcb_configure_default(hw); + + for (i = 0; i < I40E_PFQF_HREGION_MAX_INDEX; i++) + i40e_write_rx_ctl(hw, I40E_PFQF_HREGION(i), 0); + + memset(info, 0, sizeof(struct i40e_queue_region_info)); + + return 0; +} + +int rte_pmd_i40e_queue_region_conf(uint8_t port, + struct rte_i40e_rss_region_conf *conf_ptr) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port]; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + enum rte_pmd_i40e_queue_region_op op_type = conf_ptr->op; + uint32_t ret; + + if (!is_i40e_supported(dev)) + return -ENOTSUP; + + switch (op_type) { + case RTE_PMD_I40E_QUEUE_REGION_SET: + ret = i40e_set_queue_region(pf, conf_ptr); + break; + case RTE_PMD_I40E_REGION_FLOWTYPE_PF_SET: + ret = i40e_set_region_flowtype_pf(hw, pf, conf_ptr); + break; + case RTE_PMD_I40E_REGION_FLOWTYPE_VF_SET: + ret = -EINVAL; + break; + case RTE_PMD_I40E_UP_REGION_SET: + ret = i40e_set_up_region(pf, conf_ptr); + break; + case RTE_PMD_I40E_REGION_ALL_FLUSH_ON: + ret = i40e_flush_region_all_conf(hw, pf, 1); + break; + case RTE_PMD_I40E_REGION_ALL_FLUSH_OFF: + ret = i40e_flush_region_all_conf(hw, pf, 0); + break; + + default: + PMD_DRV_LOG(WARNING, "op type (%d) not supported", + op_type); + ret = -EINVAL; + break; + } + + I40E_WRITE_FLUSH(hw); + + return ret; +} diff --git a/drivers/net/i40e/rte_pmd_i40e.h b/drivers/net/i40e/rte_pmd_i40e.h 
index 155b7e8..a1329f4 100644
--- a/drivers/net/i40e/rte_pmd_i40e.h
+++ b/drivers/net/i40e/rte_pmd_i40e.h
@@ -91,6 +91,20 @@ enum rte_pmd_i40e_package_info {
 	RTE_PMD_I40E_PKG_INFO_MAX = 0xFFFFFFFF
 };

+/**
+ * Option types of queue region.
+ */
+enum rte_pmd_i40e_queue_region_op {
+	RTE_PMD_I40E_REGION_UNDEFINED = 0,
+	RTE_PMD_I40E_QUEUE_REGION_SET,       /**< add a queue region set */
+	RTE_PMD_I40E_REGION_FLOWTYPE_PF_SET, /**< add a PF region pctype set */
+	RTE_PMD_I40E_REGION_FLOWTYPE_VF_SET, /**< add a VF region pctype set */
+	RTE_PMD_I40E_UP_REGION_SET,          /**< add a region user priority set */
+	RTE_PMD_I40E_REGION_ALL_FLUSH_ON,    /**< flush on all configuration */
+	RTE_PMD_I40E_REGION_ALL_FLUSH_OFF,   /**< flush off all configuration */
+	RTE_PMD_I40E_QUEUE_REGION_OP_MAX
+};
+
 #define RTE_PMD_I40E_DDP_NAME_SIZE 32

 /**
@@ -146,6 +160,18 @@ struct rte_pmd_i40e_ptype_mapping {
 };

 /**
+ * Queue region information obtained from the CLI.
+ */
+struct rte_i40e_rss_region_conf {
+	uint8_t region_id;
+	uint8_t hw_flowtype;
+	uint8_t queue_start_index;
+	uint8_t queue_num;
+	uint8_t user_priority;
+	enum rte_pmd_i40e_queue_region_op op;
+};
+
+/**
  * Notify VF when PF link status changes.
 *
 * @param port
@@ -657,4 +683,16 @@ int rte_pmd_i40e_ptype_mapping_replace(uint8_t port,
 int rte_pmd_i40e_add_vf_mac_addr(uint8_t port, uint16_t vf_id,
 				 struct ether_addr *mac_addr);

+/**
+ * Get the RSS queue region configuration from the CLI and apply it
+ * to the port according to the command option type.
+ *
+ * @param port
+ *    The port identifier of the device.
+ * @param conf_ptr
+ *    Pointer to the struct that contains all the
+ *    queue region configuration parameters.
+ */
+int rte_pmd_i40e_queue_region_conf(uint8_t port,
+			struct rte_i40e_rss_region_conf *conf_ptr);
 #endif /* _PMD_I40E_H_ */
diff --git a/drivers/net/i40e/rte_pmd_i40e_version.map b/drivers/net/i40e/rte_pmd_i40e_version.map
index ef8882b..c3ee2da 100644
--- a/drivers/net/i40e/rte_pmd_i40e_version.map
+++ b/drivers/net/i40e/rte_pmd_i40e_version.map
@@ -50,5 +50,6 @@ DPDK_17.11 {
 	global:

 	rte_pmd_i40e_add_vf_mac_addr;
+	rte_pmd_i40e_queue_region_conf;

 } DPDK_17.08;
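One more note on i40e_vsi_update_queue_region_mapping() earlier in the patch: each region is written into a 16-bit tc_mapping word as (first queue, log2 of the queue count), which is why i40e_set_queue_region() restricts region sizes to the powers of two 1..64 and why rte_bsf32() is used. A standalone sketch of that encoding; the shift values are assumptions based on the base driver's I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT / I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT definitions (offset in the low bits, a 3-bit exponent above it):

```c
#include <stdint.h>

#define TC_QUE_OFFSET_SHIFT 0 /* assumed I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT */
#define TC_QUE_NUMBER_SHIFT 9 /* assumed I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT */

/* Bit-scan-forward for a power-of-two size, like rte_bsf32() in the patch. */
static unsigned int bsf32(uint32_t v)
{
	unsigned int n = 0;

	while (!(v & 1)) {
		v >>= 1;
		n++;
	}
	return n;
}

/* Build one tc_mapping word: the region's first queue, plus its size
 * encoded as a power-of-two exponent in a 3-bit field. */
static uint16_t tc_mapping_word(uint16_t queue_start, uint16_t queue_num)
{
	return (uint16_t)((queue_start << TC_QUE_OFFSET_SHIFT) |
			  (bsf32(queue_num) << TC_QUE_NUMBER_SHIFT));
}
```

The 3-bit exponent field caps a region at 2^7 = 128 queue slots, so the seven legal sizes 1, 2, 4, 8, 16, 32, 64 checked against tc_size_tb[] in the patch all fit.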