I have looked at previous mails on this mailing list and also elsewhere on Google and could not find any information related to this. Whenever I have to reassemble a valid IP packet with more than 4 fragments, I see a crash (stack trace below). I assume the number 4 comes from RTE_LIBRTE_IP_FRAG_MAX_FRAG. To trigger this, I sent a fragmented IP packet via:

    ping <DPDK IP addr> -s 6000

(gdb) bt
#0  ip_frag_lookup (tbl=tbl@entry=0x7fff7a32ce80, key=key@entry=0x7ffff6eeee10,
    tms=tms@entry=2602613353715115, free=free@entry=0x7ffff6eeedb8,
    stale=stale@entry=0x7ffff6eeedc0)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:379
#1  0x00007ffff7c021f6 in ip_frag_find (tbl=tbl@entry=0x7fff7a32ce80,
    dr=dr@entry=0x7fff7a32c900, key=key@entry=0x7ffff6eeee10, tms=2602613353715115)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:286
#2  0x00007ffff7c00280 in rte_ipv4_frag_reassemble_packet (tbl=0x7fff7a32ce80,
    dr=0x7fff7a32c900, mb=0x7fff8b71b480, tms=<optimized out>, ip_hdr=<optimized out>)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/rte_ipv4_reassembly.c:160

(gdb) f 0
#0  ip_frag_lookup (tbl=tbl@entry=0x7fff7a32ce80, key=key@entry=0x7ffff6eeee10,
    tms=tms@entry=2602613353715115, free=free@entry=0x7ffff6eeedb8,
    stale=stale@entry=0x7ffff6eeedc0)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:379
379         if (ip_frag_key_cmp(key, &p1[i].key) == 0)

(gdb) f 1
#1  0x00007ffff7c021f6 in ip_frag_find (tbl=tbl@entry=0x7fff7a32ce80,
    dr=dr@entry=0x7fff7a32c900, key=key@entry=0x7ffff6eeee10, tms=2602613353715115)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/ip_frag_internal.c:286
286         if ((pkt = ip_frag_lookup(tbl, key, tms, &free, &stale)) == NULL) {

(gdb) f 2
#2  0x00007ffff7c00280 in rte_ipv4_frag_reassemble_packet (tbl=0x7fff7a32ce80,
    dr=0x7fff7a32c900, mb=0x7fff8b71b480, tms=<optimized out>, ip_hdr=<optimized out>)
    at /usr/src/debug/dpdk-17.11.2-6.fc30.x86_64/lib/librte_ip_frag/rte_ipv4_reassembly.c:160
160         if ((fp = ip_frag_find(tbl, dr, &key, tms)) == NULL) {

Is this a known issue? Are there any workarounds?
> I have looked at previous mails on this mailing list and also elsewhere on
> Google and could not find any information related to this.

I meant the users@dpdk.org mailing list.
(gdb) p *key
$1 = {src_dst = {8653288496738183178, 140737306069544, 0, 140737306069312},
  id = 22534, key_len = 1}

(gdb) p *tbl
$4 = {max_cycles = 140735532901248, entry_mask = 2339506112, max_entries = 32767,
  use_entries = 2339501375, bucket_entries = 32767, nb_entries = 2339494272,
  nb_buckets = 32767, last = 0x7fff8b71bdc0, lru = {tqh_first = 0x7fff7bacb700,
    tqh_last = 0x7fff7aed8f00}, stat = {find_num = 0, add_num = 0, del_num = 0,
    reuse_num = 0, fail_total = 0, fail_nospace = 0}, pkt = 0x7fff7a32cf00}
I had been making a mistake. My code was of the form:

    if (rte_ipv4_frag_pkt_is_fragmented(ipv4_header)) {
        rte_mbuf* assembled_msg = rte_ipv4_frag_reassemble_packet();
        if (assembled_msg != nullptr) {
            rte_ip_frag_free_death_row();
        }
    }

rte_ip_frag_free_death_row was NOT getting called when reassembly failed, which is what happens in my case of more than 4 IP fragments. The fix is to call rte_ip_frag_free_death_row regardless of the reassembly status. Apparently, the segfault is the resulting failure mode in this scenario. Please confirm, and close this bug if nothing else needs to be done.
If you need to handle more than 4 fragments per packet, you'll need to increase the CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG value in your config file and rebuild DPDK. Say, change it to 8 or so.
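For the make-based build system used by DPDK 17.11, that option would typically be edited in config/common_base (the exact file may differ for your target configuration); something like:

```
# config/common_base -- maximum number of fragments per packet that the
# reassembly library can hold per flow (the default value is 4)
CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG=8
```

Note that DPDK and the application both need to be rebuilt afterwards, since the value sizes a compile-time array inside the fragmentation table's per-packet state.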
Abhijeet, Can you try Konstantin's suggestion and update here? Thanks
4 fragments are sufficient for my requirement. My issue was that more than CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG fragments should be handled gracefully by dropping the fragments, instead of seg faulting. As reported in comment 3, this issue was caused by inappropriate use of the API. Once I changed the code flow from:

    if (rte_ipv4_frag_pkt_is_fragmented(ipv4_header)) {
        rte_mbuf* assembled_msg = rte_ipv4_frag_reassemble_packet();
        if (assembled_msg != nullptr) {
            rte_ip_frag_free_death_row();
        }
    }

to:

    if (rte_ipv4_frag_pkt_is_fragmented(ipv4_header)) {
        rte_mbuf* assembled_msg = rte_ipv4_frag_reassemble_packet();
        if (assembled_msg != nullptr) {
            // do something
        }
        rte_ip_frag_free_death_row();
    }

DPDK handles more than CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG fragments correctly by dropping them gracefully.
Are we ok to close this then?
This may be closed, provided it is acceptable that this kind of incorrect use of the API fails with a seg fault.