[dpdk-dev] Using a hash table in an MP environment

Helmut Sim simhelmut at gmail.com
Wed Jun 11 07:23:34 CEST 2014


One more simple way would be to assign the desired hash function to the
hash_func field of the rte_hash structure returned by the
rte_hash_find_existing() call at the secondary initialization phase. That way
there is no difference between a primary and a secondary process.
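
A minimal sketch of that approach is below, assuming a DPDK release (as of
2014) in which struct rte_hash and its hash_func field are still exposed in
rte_hash.h; the table name "flow_table" and the choice of rte_jhash are only
placeholders for whatever your application actually registered at creation:

#include <stddef.h>
#include <rte_hash.h>
#include <rte_jhash.h>

static struct rte_hash *
attach_flow_table(void)
{
        /* Look up the table the primary process created under this name. */
        struct rte_hash *h = rte_hash_find_existing("flow_table");
        if (h == NULL)
                return NULL;    /* primary has not created it yet */

        /*
         * Re-assign the hash function pointer so that it holds an address
         * that is valid in this process.
         */
        h->hash_func = rte_jhash;
        return h;
}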

Regards,



On Tue, Jun 10, 2014 at 3:25 PM, Venkat Thummala <
venkat.thummala.1978 at gmail.com> wrote:

> Hi Shirley,
>
> Please refer to Section 20.3 [Multi-Process Limitations] in the DPDK
> Programmer's Guide.
>
> The use of function pointers between multiple processes running based off
> different compiled binaries is not supported, since the location of a given
> function in one process may be different from its location in a second. This
> prevents the librte_hash library from behaving properly as in a
> multi-threaded instance, since it uses a pointer to the hash function
> internally.
> To work around this issue, it is recommended that multi-process applications
> perform the hash calculations by directly calling the hashing function from
> the code and then using the rte_hash_add_key_with_hash()/
> rte_hash_lookup_with_hash() functions instead of the functions which do the
> hashing internally, such as rte_hash_add_key()/rte_hash_lookup().
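>
> A minimal sketch of that workaround is below; the flow_key layout, the use
> of rte_jhash and the init value 0 are just placeholders, the signature only
> has to be computed the same way in every process:
>
> #include <stdint.h>
> #include <rte_hash.h>
> #include <rte_jhash.h>
>
> struct flow_key {
>         uint32_t src_ip;
>         uint32_t dst_ip;
>         uint16_t src_port;
>         uint16_t dst_port;
> };
>
> static int
> put_and_get(const struct rte_hash *h, const struct flow_key *key)
> {
>         /*
>          * Compute the signature by calling the hash function directly,
>          * so the function pointer stored inside the shared table is
>          * never dereferenced.
>          */
>         hash_sig_t sig = rte_jhash(key, sizeof(*key), 0);
>
>         int32_t pos = rte_hash_add_key_with_hash(h, key, sig);
>         if (pos < 0)
>                 return pos;     /* e.g. no space left in the table */
>
>         /* Any process can later look the key up with the same signature. */
>         return rte_hash_lookup_with_hash(h, key, sig);
> }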
>
> Thanks
> Venkat
>
>
> On 10 June 2014 17:05, Neil Horman <nhorman at tuxdriver.com> wrote:
>
> > On Tue, Jun 10, 2014 at 11:02:03AM +0300, Uri Sidler wrote:
> > > Hi,
> > > I am currently using a hash table in a multi-process environment.
> > > The master process creates the hash table, which is later used by other
> > > secondary processes. But the secondary processes fail to use the hash
> > > table, since the hash function address actually points to a different
> > > function (this makes sense, since the address of the hash function is in
> > > fact different per process). How can I solve this issue?
> > >
> > > Thanks,
> > > Shirley.
> > >
> >
> > Use shared memory.  See shmget().
> >
> > Neil
> >
> >
>

