Bug 338 - IP Reassembly with more than 4 packets Segfault
Summary: IP Reassembly with more than 4 packets Segfault
Status: UNCONFIRMED
Alias: None
Product: DPDK
Classification: Unclassified
Component: core
Version: 17.11
Hardware: x86 Linux
Importance: Normal normal
Target Milestone: ---
Assignee: Abhijeet
URL:
Depends on:
Blocks:
 
Reported: 2019-08-13 05:05 CEST by Abhijeet
Modified: 2019-08-15 16:44 CEST
CC: 2 users




Description Abhijeet 2019-08-13 05:05:17 CEST
I have looked at previous mails on this mailing list and also elsewhere on Google and could not find any information related to this.

Whenever I have to reassemble a valid IP packet with more than 4 fragments, I see a crash. Stack trace below. I assume the number 4 comes from RTE_LIBRTE_IP_FRAG_MAX_FRAG.

To trigger this, I sent a fragmented IP packet via - ping <DPDK IP addr> -s 6000

(gdb) bt
#0  ip_frag_lookup (tbl=tbl@entry=0x7fff7a32ce80, key=key@entry=0x7ffff6eeee10, tms=tms@entry=2602613353715115, free=free@entry=0x7ffff6eeedb8, 
    stale=stale@entry=0x7ffff6eeedc0) at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:379
#1  0x00007ffff7c021f6 in ip_frag_find (tbl=tbl@entry=0x7fff7a32ce80, dr=dr@entry=0x7fff7a32c900, key=key@entry=0x7ffff6eeee10, tms=2602613353715115)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:286
#2  0x00007ffff7c00280 in rte_ipv4_frag_reassemble_packet (tbl=0x7fff7a32ce80, dr=0x7fff7a32c900, mb=0x7fff8b71b480, tms=<optimized out>, 
    ip_hdr=<optimized out>) at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/rte_ipv4_reassembly.c:160

(gdb) f 0
#0  ip_frag_lookup (tbl=tbl@entry=0x7fff7a32ce80, key=key@entry=0x7ffff6eeee10, tms=tms@entry=2602613353715115, free=free@entry=0x7ffff6eeedb8, 
    stale=stale@entry=0x7ffff6eeedc0) at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:379
379	if (ip_frag_key_cmp(key, &p1[i].key) == 0)

(gdb) f 1
#1  0x00007ffff7c021f6 in ip_frag_find (tbl=tbl@entry=0x7fff7a32ce80, dr=dr@entry=0x7fff7a32c900, key=key@entry=0x7ffff6eeee10, tms=2602613353715115)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:286
286	if ((pkt = ip_frag_lookup(tbl, key, tms, &free, &stale)) == NULL) {

(gdb) f 2
#2  0x00007ffff7c00280 in rte_ipv4_frag_reassemble_packet (tbl=0x7fff7a32ce80, dr=0x7fff7a32c900, mb=0x7fff8b71b480, tms=<optimized out>, 
    ip_hdr=<optimized out>) at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/rte_ipv4_reassembly.c:160
160	if ((fp = ip_frag_find(tbl, dr, &key, tms)) == NULL) {

Is this a known issue? Are there any workarounds?
Comment 1 Abhijeet 2019-08-13 05:06:43 CEST
> I have looked at previous mails on this mailing list and also elsewhere on
> Google and could not find any information related to this.

I meant the users@dpdk.org mailing list.
Comment 2 Abhijeet 2019-08-13 05:22:16 CEST
(gdb) p *key
$1 = {src_dst = {8653288496738183178, 140737306069544, 0, 140737306069312}, id = 22534, key_len = 1}

(gdb) p *tbl
$4 = {max_cycles = 140735532901248, entry_mask = 2339506112, max_entries = 32767, use_entries = 2339501375, bucket_entries = 32767, nb_entries = 2339494272, 
  nb_buckets = 32767, last = 0x7fff8b71bdc0, lru = {tqh_first = 0x7fff7bacb700, tqh_last = 0x7fff7aed8f00}, stat = {find_num = 0, add_num = 0, del_num = 0, 
    reuse_num = 0, fail_total = 0, fail_nospace = 0}, pkt = 0x7fff7a32cf00}
Comment 3 Abhijeet 2019-08-13 07:01:07 CEST
I had been making a mistake. My code was of the format -

if (rte_ipv4_frag_pkt_is_fragmented(ipv4_header))
{
  rte_mbuf* assembled_msg = rte_ipv4_frag_reassemble_packet();
  if (assembled_msg != nullptr)
  {
    rte_ip_frag_free_death_row();
  }
}

rte_ip_frag_free_death_row was NOT getting called when reassembly failed, as it
did in my case of > 4 IP fragments. The fix is to call
rte_ip_frag_free_death_row irrespective of the reassembly status.

Apparently, this is the resulting failure mode in this scenario. Please confirm and close this bug if nothing has to be done.
Comment 4 Konstantin Ananyev 2019-08-13 10:31:44 CEST
If you need to handle more than 4 fragments per packet, you'll need to increase the CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG value in your config file and rebuild DPDK.
Say, change it to 8 or so.
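For reference, with the 17.11 make-based build system the change would look roughly like this. The file path and the build target below are assumptions based on a stock source tree; adjust both for your layout.

```shell
# Assumed 17.11 make-based layout: the option lives in config/common_base
# with a default of 4. Bump it to 8 and rebuild.
sed -i 's/^CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG=4$/CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG=8/' \
    config/common_base
make config T=x86_64-native-linuxapp-gcc
make
```

Applications linking against the rebuilt library then accept up to 8 fragments per packet before reassembly fails.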
Comment 5 Ajit Khaparde 2019-08-15 02:48:21 CEST
Abhijeet, Can you try Konstantin's suggestion and update here? Thanks
Comment 6 Abhijeet 2019-08-15 07:56:08 CEST
4 fragments are enough for my requirement. My issue was that more than CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG fragments should be handled gracefully by dropping them instead of segfaulting.

As reported in comment 3, this issue was caused by inappropriate use of the API.

Once I changed the code flow from -

if (rte_ipv4_frag_pkt_is_fragmented(ipv4_header))
{
  rte_mbuf* assembled_msg = rte_ipv4_frag_reassemble_packet();
  if (assembled_msg != nullptr)
  {
    rte_ip_frag_free_death_row();
  }
}

to -

if (rte_ipv4_frag_pkt_is_fragmented(ipv4_header))
{
  rte_mbuf* assembled_msg = rte_ipv4_frag_reassemble_packet();
  if (assembled_msg != nullptr)
  {
    // do something
  }

  rte_ip_frag_free_death_row();
}

DPDK handles more than CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG fragments correctly by dropping them gracefully.
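To illustrate why the ordering matters: when reassembly fails, the fragment mbufs are parked on a fixed-size "death row" that the application must drain. The sketch below uses hypothetical Mbuf/DeathRow stand-ins rather than the real DPDK headers, and the capacity constant is only a stand-in for the real row length; it models the contract, not the library internals.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for rte_mbuf, for illustration only.
struct Mbuf { int id; };

// Hypothetical stand-in for the death-row structure: a fixed-size
// array of parked mbufs plus a count of occupied slots.
struct DeathRow {
    static constexpr std::size_t kCapacity = 4;  // stand-in for the real limit
    Mbuf* row[kCapacity];
    std::size_t cnt = 0;
};

// Models the reassembly failure path: the fragment is parked on the
// death row and NULL is returned. If the caller never drains the row,
// cnt eventually runs past the array, corrupting adjacent memory.
Mbuf* reassemble_fail(DeathRow& dr, Mbuf* frag) {
    assert(dr.cnt < DeathRow::kCapacity && "death row overflow");
    dr.row[dr.cnt++] = frag;
    return nullptr;  // reassembly failed
}

// Models rte_ip_frag_free_death_row(): releases the parked mbufs and
// resets the counter, making room for future failures.
void free_death_row(DeathRow& dr) {
    for (std::size_t i = 0; i < dr.cnt; ++i)
        delete dr.row[i];
    dr.cnt = 0;
}
```

In this model, calling free_death_row() only when reassembly returns non-NULL (the buggy flow from comment 3) trips the overflow assertion after kCapacity failures, while calling it unconditionally after every attempt keeps the count bounded.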
Comment 7 Ajit Khaparde 2019-08-15 15:25:57 CEST
Are we ok to close this then?
Comment 8 Abhijeet 2019-08-15 16:44:24 CEST
This may be closed, provided the segfault failure mode when the API is used incorrectly is acceptable.
