[dpdk-dev] [PATCH v3 6/9] eal: register non-EAL threads as lcores
David Marchand
david.marchand at redhat.com
Tue Jun 23 09:49:18 CEST 2020
Hello Konstantin,
On Mon, Jun 22, 2020 at 5:49 PM Ananyev, Konstantin
<konstantin.ananyev at intel.com> wrote:
> > diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
> > index 86d32a3dd7..7db05428e7 100644
> > --- a/lib/librte_eal/common/eal_common_lcore.c
> > +++ b/lib/librte_eal/common/eal_common_lcore.c
> > @@ -220,3 +221,38 @@ rte_socket_id_by_idx(unsigned int idx)
> > }
> > return config->numa_nodes[idx];
> > }
> > +
> > +static rte_spinlock_t lcore_lock = RTE_SPINLOCK_INITIALIZER;
> > +
> > +unsigned int
> > +eal_lcore_non_eal_allocate(void)
> > +{
> > +	struct rte_config *cfg = rte_eal_get_configuration();
> > +	unsigned int lcore_id;
> > +
> > +	rte_spinlock_lock(&lcore_lock);
>
> I think it will break current DPDK MP modes.
> The problem here is that rte_config (and lcore_role[]) is in shared
> memory, while the lock is local.
> The simplest way is probably to move lcore_lock into rte_config.
Even before this series, MP had no protection on lcore placement between
primary and secondary processes.
Personally, I have no use for DPDK MP, and marking MP as not supporting
this new feature is tempting for a first phase.
If this is a strong requirement, I can look at it in a second phase.
What do you think?
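For reference, the suggestion of moving the lock into the shared config could look like the sketch below. This is a self-contained mock, not DPDK code: `mock_config`, the atomic-flag lock, and the function names are stand-ins for `rte_config`, `rte_spinlock_t`, and the EAL helpers, and the actual fix would also have to place the structure in shared memory so secondaries see the same lock.

```c
#include <assert.h>
#include <stdatomic.h>

#define MAX_LCORE 8

enum lcore_role { ROLE_OFF = 0, ROLE_NON_EAL };

/* Sketch: the lock lives inside the config itself, so every process
 * mapping this structure serializes on the same lock. */
struct mock_config {
	atomic_flag lock;	/* stand-in for an rte_spinlock_t */
	enum lcore_role lcore_role[MAX_LCORE];
	unsigned int lcore_count;
};

static struct mock_config cfg = { .lock = ATOMIC_FLAG_INIT };

static void cfg_lock(void)   { while (atomic_flag_test_and_set(&cfg.lock)) ; }
static void cfg_unlock(void) { atomic_flag_clear(&cfg.lock); }

/* Mirror of eal_lcore_non_eal_allocate(): grab the first free slot. */
unsigned int lcore_non_eal_allocate(void)
{
	unsigned int lcore_id;

	cfg_lock();
	for (lcore_id = 0; lcore_id < MAX_LCORE; lcore_id++) {
		if (cfg.lcore_role[lcore_id] != ROLE_OFF)
			continue;
		cfg.lcore_role[lcore_id] = ROLE_NON_EAL;
		cfg.lcore_count++;
		break;
	}
	cfg_unlock();
	return lcore_id; /* == MAX_LCORE when no slot is free */
}

/* Mirror of eal_lcore_non_eal_release(): give the slot back. */
void lcore_non_eal_release(unsigned int lcore_id)
{
	cfg_lock();
	if (cfg.lcore_role[lcore_id] == ROLE_NON_EAL) {
		cfg.lcore_role[lcore_id] = ROLE_OFF;
		cfg.lcore_count--;
	}
	cfg_unlock();
}
```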
>
> > +	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
> > +		if (cfg->lcore_role[lcore_id] != ROLE_OFF)
> > +			continue;
> > +		cfg->lcore_role[lcore_id] = ROLE_NON_EAL;
> > +		cfg->lcore_count++;
> > +		break;
> > +	}
> > +	if (lcore_id == RTE_MAX_LCORE)
> > +		RTE_LOG(DEBUG, EAL, "No lcore available.\n");
> > +	rte_spinlock_unlock(&lcore_lock);
> > +	return lcore_id;
> > +}
> > +
> > +void
> > +eal_lcore_non_eal_release(unsigned int lcore_id)
> > +{
> > +	struct rte_config *cfg = rte_eal_get_configuration();
> > +
> > +	rte_spinlock_lock(&lcore_lock);
> > +	if (cfg->lcore_role[lcore_id] == ROLE_NON_EAL) {
> > +		cfg->lcore_role[lcore_id] = ROLE_OFF;
> > +		cfg->lcore_count--;
> > +	}
> > +	rte_spinlock_unlock(&lcore_lock);
> > +}
> > diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
> > index a7ae0691bf..1cbddc4b5b 100644
> > --- a/lib/librte_eal/common/eal_common_thread.c
> > +++ b/lib/librte_eal/common/eal_common_thread.c
> > @@ -236,3 +236,36 @@ rte_ctrl_thread_create(pthread_t *thread, const char *name,
> > pthread_join(*thread, NULL);
> > return -ret;
> > }
> > +
> > +void
> > +rte_thread_register(void)
> > +{
> > +	unsigned int lcore_id;
> > +	rte_cpuset_t cpuset;
> > +
> > +	/* EAL init flushes all lcores, we can't register before. */
> > +	assert(internal_config.init_complete == 1);
> > +	if (pthread_getaffinity_np(pthread_self(), sizeof(cpuset),
> > +			&cpuset) != 0)
> > +		CPU_ZERO(&cpuset);
> > +	lcore_id = eal_lcore_non_eal_allocate();
> > +	if (lcore_id >= RTE_MAX_LCORE)
> > +		lcore_id = LCORE_ID_ANY;
> > +	rte_thread_init(lcore_id, &cpuset);
>
> So we are just setting affinity to the same value, right?
> Not a big deal, but it might be easier to allow rte_thread_init()
> to accept cpuset == NULL (and just not change thread affinity in that case).
rte_thread_init() does not change the thread's CPU affinity; it handles
per-thread variable initialization (TLS included).
So do you mean accepting cpuset == NULL and doing the getaffinity in that case?
rte_thread_init() is EAL-private for now.
That would save some code in this function, but all other EAL code will
call it with a non-NULL cpuset.
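To make the trade-off concrete, a variant of the init helper that accepts cpuset == NULL could look like the sketch below. This is illustrative only: `thread_init_sketch` and `per_thread_cpuset` are hypothetical names, the real rte_thread_init() also sets up the per-lcore TLS variables elided here, and the fallback-to-empty-set behavior is copied from the rte_thread_register() hunk above (Linux/glibc, needs `_GNU_SOURCE` and -pthread).

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Hypothetical variant of the EAL-private rte_thread_init() that
 * resolves the caller's affinity itself when cpuset == NULL. */
static __thread cpu_set_t per_thread_cpuset;

int thread_init_sketch(unsigned int lcore_id, const cpu_set_t *cpuset)
{
	(void)lcore_id; /* per-lcore TLS bookkeeping elided in this sketch */

	if (cpuset == NULL) {
		/* No set given: read the current thread's affinity,
		 * falling back to an empty set on failure, as the
		 * rte_thread_register() hunk does. */
		if (pthread_getaffinity_np(pthread_self(),
				sizeof(per_thread_cpuset),
				&per_thread_cpuset) != 0)
			CPU_ZERO(&per_thread_cpuset);
	} else {
		per_thread_cpuset = *cpuset;
	}
	return 0;
}
```

With this shape, rte_thread_register() would pass NULL, while the rest of the EAL keeps passing an explicit set.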
--
David Marchand