Re: [U-Boot] [PATCH 1/3] net: defragment IP packets

On Thu 30 Jul 2009 05:02, Alessandro Rubini pondered:
The defragmenting code is enabled by CONFIG_IP_DEFRAG. The code is useful for TFTP transfers, so the static reassembly buffer is sized based on CONFIG_TFTP_MAXBLOCK (default is 16kB).
The packet buffer is used as an array of "hole" structures, acting as a doubly-linked list. Each new fragment can split a hole in two, shrink a hole, or fill a hole. There is no support for a fragment overlapping two different holes (i.e., the new fragment spans an already-received fragment).
The code includes a number of suggestions by Robin Getz.
Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
 net/net.c | 172 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 167 insertions(+), 5 deletions(-)
diff --git a/net/net.c b/net/net.c
index 641c37c..be382dd 100644
--- a/net/net.c
+++ b/net/net.c
@@ -1117,6 +1117,164 @@ static void CDPStart(void)
 }
 #endif
+#ifdef CONFIG_IP_DEFRAG
+/*
+ * This function collects fragments in a single packet, according
+ * to the algorithm in RFC815. It returns NULL or the pointer to
+ * a complete packet, in static storage
+ */
+#ifndef CONFIG_TFTP_MAXBLOCK
+#define CONFIG_TFTP_MAXBLOCK 16384
It is more than tftp - nfs could also use the same.
How about CONFIG_NET_MAXDEFRAG instead?
+#endif
+#define IP_PAYLOAD (CONFIG_TFTP_MAXBLOCK + 4)
+#define IP_PKTSIZE (IP_PAYLOAD + IP_HDR_SIZE_NO_UDP)
+
+/*
+ * this is the packet being assembled, either data or frag control.
+ * Fragments go by 8 bytes, so this union must be 8 bytes long
+ */
+struct hole {
+	/* first_byte is address of this structure */
+	u16 last_byte;	/* last byte in this hole + 1 (begin of next hole) */
+	u16 next_hole;	/* index of next (in 8-b blocks), 0 == none */
+	u16 prev_hole;	/* index of prev, 0 == none */
+	u16 unused;
+};
+
+static IP_t *__NetDefragment(IP_t *ip, int *lenp)
+{
I don't understand the purpose of the lenp.
The calling function doesn't use the len var, except for ICMP_ECHO_REQUEST, which are not allowed to be fragmented.
I eliminated it - and suffered no side effects.
+	static uchar pkt_buff[IP_PKTSIZE] __attribute__((aligned(PKTALIGN)));
+	static u16 first_hole, total_len;
+	struct hole *payload, *thisfrag, *h, *newh;
+	IP_t *localip = (IP_t *)pkt_buff;
+	uchar *indata = (uchar *)ip;
+	int offset8, start, len, done = 0;
+	u16 ip_off = ntohs(ip->ip_off);
+
+	/* payload starts after IP header, this fragment is in there */
+	payload = (struct hole *)(pkt_buff + IP_HDR_SIZE_NO_UDP);
+	offset8 = (ip_off & IP_OFFS);
+	thisfrag = payload + offset8;
+	start = offset8 * 8;
+	len = ntohs(ip->ip_len) - IP_HDR_SIZE_NO_UDP;
+
+	if (start + len > IP_PAYLOAD)	/* fragment extends too far */
+		return NULL;
+
+	if (!total_len || localip->ip_id != ip->ip_id) {
+		/* new (or different) packet, reset structs */
+		total_len = 0xffff;
+		payload[0].last_byte = ~0;
+		payload[0].next_hole = 0;
+		payload[0].prev_hole = 0;
+		first_hole = 0;
+		/* any IP header will work, copy the first we received */
+		memcpy(localip, ip, IP_HDR_SIZE_NO_UDP);
+	}
I'm not sure the reset (if we lose a packet, or get a bad one, start over) is a great idea.
For some reason - when I'm ping flooding while tftping a large file (with a large tftp block size) - things hang. If I set the block size to under the MTU, it works fine. Do you get the same?
I'm still poking to figure out why...
+	/*
+	 * What follows is the reassembly algorithm. We use the payload
+	 * array as a linked list of hole descriptors, as each hole starts
+	 * at a multiple of 8 bytes. However, the last byte can be whatever
+	 * value, so it is represented as a byte count, not as 8-byte blocks.
+	 */
+	h = payload + first_hole;
+	while (h->last_byte < start) {
+		if (!h->next_hole) {
+			/* no hole that far away */
+			return NULL;
+		}
+		h = payload + h->next_hole;
+	}
+
+	if (offset8 + (len / 8) <= h - payload) {
+		/* no overlap with holes (dup fragment?) */
+		return NULL;
+	}
+
+	if (!(ip_off & IP_FLAGS_MFRAG)) {
+		/* no more fragments: truncate this (last) hole */
+		total_len = start + len;
+		h->last_byte = start + len;
+	}
+
+	/*
+	 * There is some overlap: fix the hole list. This code doesn't
+	 * deal with a fragment that overlaps with two different holes
+	 * (thus being a superset of a previously-received fragment).
+	 */
+	if ((h >= thisfrag) && (h->last_byte <= start + len)) {
+		/* complete overlap with hole: remove hole */
+		if (!h->prev_hole && !h->next_hole) {
+			/* last remaining hole */
+			done = 1;
+		} else if (!h->prev_hole) {
+			/* first hole */
+			first_hole = h->next_hole;
+			payload[h->next_hole].prev_hole = 0;
+		} else if (!h->next_hole) {
+			/* last hole */
+			payload[h->prev_hole].next_hole = 0;
+		} else {
+			/* in the middle of the list */
+			payload[h->next_hole].prev_hole = h->prev_hole;
+			payload[h->prev_hole].next_hole = h->next_hole;
+		}
+	} else if (h->last_byte <= start + len) {
+		/* overlaps with final part of the hole: shorten this hole */
+		h->last_byte = start;
+	} else if (h >= thisfrag) {
+		/* overlaps with initial part of the hole: move this hole */
+		newh = thisfrag + (len / 8);
+		*newh = *h;
+		h = newh;
+		if (h->next_hole)
+			payload[h->next_hole].prev_hole = (h - payload);
+		if (h->prev_hole)
+			payload[h->prev_hole].next_hole = (h - payload);
+		else
+			first_hole = (h - payload);
+	} else {
+		/* fragment sits in the middle: split the hole */
+		newh = thisfrag + (len / 8);
+		*newh = *h;
+		h->last_byte = start;
+		h->next_hole = (newh - payload);
+		newh->prev_hole = (h - payload);
+		if (newh->next_hole)
+			payload[newh->next_hole].prev_hole = (newh - payload);
+	}
+
+	/* finally copy this fragment and possibly return whole packet */
+	memcpy((uchar *)thisfrag, indata + IP_HDR_SIZE_NO_UDP, len);
+	if (!done)
+		return NULL;
+
+	localip->ip_len = htons(total_len);
+	*lenp = total_len + IP_HDR_SIZE_NO_UDP;
+	return localip;
+}
+
+static inline IP_t *NetDefragment(IP_t *ip, int *lenp)
+{
+	u16 ip_off = ntohs(ip->ip_off);
+	if (!(ip_off & (IP_OFFS | IP_FLAGS_MFRAG)))
+		return ip; /* not a fragment */
+	return __NetDefragment(ip, lenp);
+}
+
+#else /* !CONFIG_IP_DEFRAG */
+
+static inline IP_t *NetDefragment(IP_t *ip, int *lenp)
+{
+	return ip;
+}
+#endif
This needs to have the same logic (ip_off & (IP_OFFS | IP_FLAGS_MFRAG)) as the above function. See comment below.
 void NetReceive(volatile uchar * inpkt, int len)
@@ -1363,10 +1521,12 @@ NetReceive(volatile uchar * inpkt, int len)
 #ifdef ET_DEBUG
 		puts ("Got IP\n");
 #endif
+		/* Before we start poking the header, make sure it is there */
 		if (len < IP_HDR_SIZE) {
 			debug ("len bad %d < %lu\n", len, (ulong)IP_HDR_SIZE);
 			return;
 		}
+		/* Check the packet length */
 		if (len < ntohs(ip->ip_len)) {
 			printf("len bad %d < %d\n", len, ntohs(ip->ip_len));
 			return;
 		}
@@ -1375,21 +1535,20 @@ NetReceive(volatile uchar * inpkt, int len)
 #ifdef ET_DEBUG
 		printf("len=%d, v=%02x\n", len, ip->ip_hl_v & 0xff);
 #endif
+		/* Can't deal with anything except IPv4 */
 		if ((ip->ip_hl_v & 0xf0) != 0x40) {
 			return;
 		}
-		/* Can't deal with fragments */
-		if (ip->ip_off & htons(IP_OFFS | IP_FLAGS_MFRAG)) {
-			return;
-		}
-		/* can't deal with headers > 20 bytes */
+		/* Can't deal with IP options (headers != 20 bytes) */
 		if ((ip->ip_hl_v & 0x0f) > 0x05) {
 			return;
 		}
+		/* Check the Checksum of the header */
 		if (!NetCksumOk((uchar *)ip, IP_HDR_SIZE_NO_UDP / 2)) {
 			puts ("checksum bad\n");
 			return;
 		}
+		/* If it is not for us, ignore it */
 		tmp = NetReadIP(&ip->ip_dst);
 		if (NetOurIP && tmp != NetOurIP && tmp != 0xFFFFFFFF) {
 #ifdef CONFIG_MCAST_TFTP
@@ -1397,6 +1556,9 @@ NetReceive(volatile uchar * inpkt, int len)
 #endif
 			return;
 		}
+		/* If we don't have a complete packet, drop it */
+		if (!(ip = NetDefragment(ip, &len)))
+			return;
This will break when you have CONFIG_IP_DEFRAG not set (it just returns the ip, and does not throw away fragmented packets, which it should do)...
 		/*
 		 * watch for ICMP host redirects
 		 *

Thanks for your comments.
+#ifndef CONFIG_TFTP_MAXBLOCK
+#define CONFIG_TFTP_MAXBLOCK 16384
It is more than tftp - nfs could also use the same.
Yes, I know. But most users are tftp ones. And if you want an even number (like 16k) as a tftp packet, you need to add the headers and the sequence count. And I prefer to have the useful number in the config. So I used "TFTP" in the name so that NFS users know they must do some calculation.
How about CONFIG_NET_MAXDEFRAG instead?
We could have MAXPAYLOAD if we count in NFS overhead as well (I don't know how much it is, currently). Hope you see my point.
+static IP_t *__NetDefragment(IP_t *ip, int *lenp)
+{
I don't understand the purpose of the lenp.
The calling function doesn't use the len var, except for ICMP_ECHO_REQUEST, which are not allowed to be fragmented.
I eliminated it - and suffered no side effects.
Well, since the caller has this "len" variable, I didn't want to leave it corrupted. But if it's actually unused after this point, we can well discard it.
+	if (!total_len || localip->ip_id != ip->ip_id) {
+		/* new (or different) packet, reset structs */
+		total_len = 0xffff;
+		payload[0].last_byte = ~0;
+		payload[0].next_hole = 0;
+		payload[0].prev_hole = 0;
+		first_hole = 0;
+		/* any IP header will work, copy the first we received */
+		memcpy(localip, ip, IP_HDR_SIZE_NO_UDP);
+	}
I'm not sure the reset (if we lose a packet, or get a bad one, start over) is a great idea.
Well, either we keep more than one in-reassembly packet (and storage begins to be a problem here) or not. I prefer not.
For some reason - when I'm ping flooding while tftping a large file (with a large tftp block size) - things hang. If I set the block size to under the MTU, it works fine. Do you get the same?
Didn't try, and I can't do that today. I suspect either your ping is over-mtu, so each new fragment triggers the above code, or simply your ether+uboot can't keep up with the data rate.
As explained in the cover letter (cover.1248943812.git.rubini@unipv.it), some fragments can be lost in high traffic, as polling mode doesn't allow packets to be queued. So I think you just lose some fragments, as target CPU time is eaten by the ping packets, and you don't get the complete reassembled packet any more.
I'm pretty sure it's like this.
On the other hand, I found a minor issue in this situation:
- start a tftp transfer
- ctrl-C it
- start another
Server retransmissions for the first transfer go into the defrag engine and that reset-defrag-data code is triggered, so a packet may be lost, and I get a sporadic T in the receiving u-boot. I think it's not a real problem, though. Or, now that I rethink about it, it can be the same issue as above: my ether can't enqueue 8k of stuff, so a fragment is lost in that case.
+#else /* !CONFIG_IP_DEFRAG */
+static inline IP_t *NetDefragment(IP_t *ip, int *lenp)
+{
+	return ip;
+}
+#endif
This needs to have the same logic (ip_off & (IP_OFFS | IP_FLAGS_MFRAG)) as the above function. See comment below.
Yes, correct. Thanks.
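[Editor's note] The agreed fix can be sketched as a standalone model. IP_t is reduced here to the one field the check needs, and the constants mirror the patch; in the real tree this stub sits in net/net.c under the #else /* !CONFIG_IP_DEFRAG */ branch:

```c
#include <stddef.h>
#include <arpa/inet.h>

#define IP_OFFS		0x1fff	/* fragment offset mask */
#define IP_FLAGS_MFRAG	0x2000	/* "more fragments" flag */

/* Reduced model of U-Boot's IP_t: only the field the check needs */
typedef struct {
	unsigned short ip_off;	/* offset + flags, network byte order */
} IP_t;

/* !CONFIG_IP_DEFRAG stub with the fragment test Robin asked for:
 * fragments are dropped instead of being passed up unassembled */
static inline IP_t *NetDefragment(IP_t *ip, int *lenp)
{
	(void)lenp;		/* length is unchanged when not defragmenting */
	if (ntohs(ip->ip_off) & (IP_OFFS | IP_FLAGS_MFRAG))
		return NULL;	/* a fragment: throw it away */
	return ip;		/* a whole packet: hand it back */
}
```

Note that a datagram with only the DF bit (0x4000) set still passes, since DF is outside the tested mask.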
/alessandro

On Fri 31 Jul 2009 03:46, Alessandro Rubini pondered:
For some reason - when I'm ping flooding while tftping a large file (with a large tftp block size) - things hang. If I set the block size to under the MTU, it works fine. Do you get the same?
Didn't try, and I can't do that today. I suspect either your ping is over-mtu, so each new fragment triggers the above code,
No, ping is the default length. The default packet size is 56 bytes, which translates into 98 bytes on the wire (plus 8 bytes of ICMP header, 20 for the IP header, and another 14 for the MAC addresses).
or simply your ether+uboot can't keep up with the data rate.
That doesn't explain why it works when there is no fragmentation...
What it looks like is there is something wrong with the main network loop - the system is so busy responding to pings, that it never sends the TFTP Block ACK when it should.
As explained in the cover letter (cover.1248943812.git.rubini@unipv.it), some fragments can be lost in high traffic, as polling mode doesn't allow packets to be queued. So I think you just lose some fragments, as target CPU time is eaten by the ping packets, and you don't get the complete reassembled packet any more.
I'm pretty sure it's like this.
I'm not convinced yet - but need to do some further poking on a different network...
On the other hand, I found a minor issue in this situation:
- start a tftp transfer
- ctrl-C it
- start another
Server retransmissions for the first transfer go into the defrag engine and that reset-defrag-data code is triggered, so a packet may be lost, and I get a sporadic T in the receiving u-boot. I think it's not a real problem, though. Or, now that I rethink about it, it can be the same issue as above: my ether can't enqueue 8k of stuff, so a fragment is lost in that case.
What is missing in the reassembly code (that is described in RFC815) is the timer. (quote from the RFC):
--------------
The final part of the algorithm is some sort of timer based mechanism which decrements the time to live field of each partially reassembled datagram, so that incomplete datagrams which have outlived their usefulness can be detected and deleted.
--------------

or simply your ether+uboot can't keep up with the data rate.
That doesn't explain why it works when there is no fragmentation...
Well, with no fragmentation there is less traffic. Each tftp packet is one packet on the wire, instead of a burst of fragments (intermixed with pings).
Is the target replying to all pings? Or is it losing some? If it loses, say, 30%, I expect one fragment in three to be lost as well. If your big-tftp is 4 fragments, 20% passes if loss is equally spread ((2/3)^4), but I fear much less, as the burst saturates the incoming queue.
I'm pretty sure it's like this.
I'm not convinced yet - but need to do some further poking on a different network...
Thanks for these tests, mine is just guessing.
What is missing in the reassembly code (that is described in RFC815) is the timer. (quote from the RFC):
The final part of the algorithm is some sort of timer based mechanism which decrements the time to live field of each partially reassembled datagram, so that incomplete datagrams which have outlived their usefulness can be detected and deleted.
But I reassemble one packet only, so I don't need to timeout partly-filled packets to recover memory. A soon as I have a fragment for a new packet, the old packet is discarded (while unfragmented stuff flies intermixed).
/alessandro

On Fri 31 Jul 2009 08:16, Alessandro Rubini pondered:
or simply your ether+uboot can't keep up with the data rate.
That doesn't explain why it works when there is no fragmentation...
Well, with no fragmentation there is less traffic. Each tftp packet is one packet on the wire, instead of a burst of fragments (intermixed with pings).
Is the target replying to all pings?
Yes. And I can see the same in wireshark.
Or is it loosing some?
No.
If it loses, say, 30%, I expect one fragment in three to be lost as well. If your big-tftp is 4 fragments, 20% passes if loss is equally spread ((2/3)^4), but I fear much less, as the burst saturates the incoming queue.
I'm pretty sure it's like this.
I'm not convinced yet - but need to do some further poking on a different network...
Thanks for these tests, mine is just guessing.
No problem. I want to make sure it is robust as well.
What is missing in the reassembly code (that is described in RFC815) is the timer. (quote from the RFC):
The final part of the algorithm is some sort of timer based mechanism which decrements the time to live field of each partially reassembled datagram, so that incomplete datagrams which have outlived their usefulness can be detected and deleted.
But I reassemble one packet only, so I don't need to timeout partly-filled packets to recover memory.
But it is for the state that you described - the user ctrl-C's a current transfer, and the reassembly algorithm doesn't know to throw away a partially accepted packet when things are cancelled...
A soon as I have a fragment for a new packet, the old packet is discarded (while unfragmented stuff flies intermixed).
Which works when you receive a complete fragment.
As you indicated - what really happens is it causes a timeout on the first packet. Up to you if you want to handle this or not...
All that should be needed is just to clear things in the start of NetLoop() (before eth_init) - there are a few things in there for CONFIG_NET_MULTI already, so I don't think that is a big deal... (That becomes your timer - a one shot at the very beginning of the transfer :)
-Robin

Is the target replying to all pings?
Yes. And I can see the same in wireshark.
Ah. I see. Strange...
What is missing in the reassembly code (that is described in RFC815) is the timer. (quote from the RFC):
The final part of the algorithm is some sort of timer based mechanism which decrements the time to live field of each partially reassembled datagram, so that incomplete datagrams which have outlived their usefulness can be detected and deleted.
But I reassemble one packet only, so I don't need to timeout partly-filled packets to recover memory.
But it is for the state that you described - the user ctrl-C's a current transfer, and the reassembly algorithm doesn't know to throw away a partially accepted packet when things are cancelled...
No, it's not like that. The old instance of the TFTP server resends the last packet of the aborted xfer, while the new server sends the new packet. Both packets are new, they just come as intermixed fragments. And none survives as there is only one reassembly buffer. Or something like that, but both are fresh fragmented packets, no timeout would solve this sporadic problem.
It seems to me that if we want a secure defragment system (one that can be used to net-boot production systems in hostile networks), it's going to be too complex. It could work well, however, as a faster tool for the interactive developer.
/alessandro

On Fri 31 Jul 2009 10:02, Alessandro Rubini pondered:
Is the target replying to all pings?
Yes. And I can see the same in wireshark.
Ah. I see. Strange...
What is missing in the reassembly code (that is described in RFC815) is the timer. (quote from the RFC):
The final part of the algorithm is some sort of timer based mechanism which decrements the time to live field of each partially reassembled datagram, so that incomplete datagrams which have outlived their usefulness can be detected and deleted.
But I reassemble one packet only, so I don't need to timeout partly-filled packets to recover memory.
But it is for the state that you described - the user ctrl-C's a current transfer, and the reassembly algorithm doesn't know to throw away a partially accepted packet when things are cancelled...
No, it's not like that.
Are you sure? That is how it is on my network/tftp server.
Maybe we are talking about different things.
What I'm talking about is:
- start a tftp file transfer.
- CNTR-C it, causing a partial reassembly to be done.
- start a new transfer.
All I did was add this to the start of NetLoop() to ensure that things are OK.
#ifdef CONFIG_IP_DEFRAG
	memset(pkt_buff, 0, IP_HDR_SIZE_NO_UDP);
#endif
The old instance of the TFTP server resends the last packet of the aborted xfer,
I don't see how this can happen. the tftp server gets a request for a block of 2048 bytes (for example) - and whacks out all 2048 bytes at once, and waits for the ACK.
If the ACK never comes - it doesn't send the packet again. The client times out, and the client needs to send another ACK of the last block before it.
What tftp server are you using?
imhotep:/home/rgetz # /usr/sbin/in.tftpd -V
tftp-hpa 0.43, with remap, with tcpwrappers
while the new server sends the new packet. Both packets are new, they just come as intermixed fragments. And none survives as there is only one reassembly buffer.
The only way this should happen is a real attack - sending malformed packets to the U-Boot system while it is downloading things on the net - which (I agree with your comments below) is outside the scope of what we are talking about.
As a minimal test - since we are only talking about tftp and nfs (so far) - if we did not attempt to assemble packets which are not coming from the serverip, that might be an OK thing to do...
if (NetReadIP(&ip->ip_src) != NetServerIP)
Or something like that, but both are fresh fragmented packets, no timeout would solve this sporadic problem.
I still don't understand the use case. The server should only be sending packets in response to an ACK. The packets should be the entire UDP block size. The server should never mix packets.
It seems to me that if we want a secure defragment system (one that can be used to net-boot production systems in hostile networks), it's going to be too complex. It could work well, however, as a faster tool for the interactive developer.
I don't want something that is secure - security is beyond tftp's ability. For that -- you need to boot an OS and use https or sftp. All I'm asking for is something robust... :)
-Robin

What I'm talking about is:
- start a tftp file transfer.
- CNTR-C it, causing a partial reassembly to be done.
- start a new transfer.
Yes, exactly the case I observed. But you are not guaranteed that a partial reassembly happens, as you would have to be ctrl-C-ing at exactly the right time.
All I did was add this to the start of NetLoop() to ensure that things are OK.
#ifdef CONFIG_IP_DEFRAG
	memset(pkt_buff, 0, IP_HDR_SIZE_NO_UDP);
#endif
And the behaviour changed? It's strange, as the new fragments will most likely (all but 1 in 64k) have a different ID.
The old instance of the TFTP server resends the last packet of the aborted xfer,
It sounded strange to me as well, but with atftpd it is what happens. Earlier I had plain old tftpd, which didn't even support the blksize option.
Here it is, from PC to PC:
17:28:49.166859 IP morgana.45177 > rudo.32854: UDP, length 516
17:28:49.167030 IP rudo.32854 > morgana.45177: UDP, length 4
17:28:49.167338 IP morgana.45177 > rudo.32854: UDP, length 516
17:28:54.173411 IP morgana.45177 > rudo.32854: UDP, length 516
17:28:59.178620 IP morgana.45177 > rudo.32854: UDP, length 516
17:29:04.183788 IP morgana.45177 > rudo.32854: UDP, length 516
17:29:09.188972 IP morgana.45177 > rudo.32854: UDP, length 516
17:29:14.194160 IP morgana.45177 > rudo.32854: UDP, length 516
atftpd repeats after 5, 10, 15, 20, 25 seconds, and nothing more.
I installed the hpa version just for trying (but I'm getting back to atftpd, as I don't like the grub-like restriction "only absolute pathnames").
It behaves similarly:
17:40:42.404539 IP morgana.54278 > rudo.32854: UDP, length 516
17:40:42.404717 IP rudo.32854 > morgana.54278: UDP, length 4
17:40:42.404972 IP morgana.54278 > rudo.32854: UDP, length 516
17:40:43.407340 IP morgana.54278 > rudo.32854: UDP, length 516
17:40:45.409766 IP morgana.54278 > rudo.32854: UDP, length 516
17:40:49.414101 IP morgana.54278 > rudo.32854: UDP, length 516
17:40:57.422814 IP morgana.54278 > rudo.32854: UDP, length 516
17:41:13.439200 IP morgana.54278 > rudo.32854: UDP, length 516
Here it's 1, 3, 7, 15, 31 seconds.
As a minimal test - since we are only talking about tftp and nfs (so far) - if we did not attempt to assemble packets which are not coming from the serverip, that might be an OK thing to do...
if (NetReadIP(&ip->ip_src) != NetServerIP)
Yes, I agree.
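[Editor's note] That guard can be sketched as a standalone model. NetReadIP and NetServerIP follow U-Boot's naming, but the types are reduced, and defrag_guard is a hypothetical stand-in for a check at the top of __NetDefragment():

```c
#include <stddef.h>
#include <string.h>

typedef unsigned int IPaddr_t;

static IPaddr_t NetServerIP;	/* stand-in for the U-Boot global */

/* Read an IP address with memcpy, since it may sit unaligned
 * inside a received packet; this mirrors U-Boot's NetReadIP(). */
static IPaddr_t NetReadIP(void *from)
{
	IPaddr_t ip;

	memcpy(&ip, from, sizeof(ip));
	return ip;
}

typedef struct {
	IPaddr_t ip_src;	/* the only field this check needs */
} IP_t;

/* Hypothetical guard for the top of __NetDefragment(): only
 * reassemble fragments coming from the configured server. */
static IP_t *defrag_guard(IP_t *ip)
{
	if (NetReadIP(&ip->ip_src) != NetServerIP)
		return NULL;	/* not from our tftp/nfs server: drop */
	/* ... continue with the RFC815 hole algorithm ... */
	return ip;
}
```

This keeps a third host on the subnet from resetting the single reassembly buffer mid-transfer, which covers the accidental case even if it is no defense against a deliberate spoofer.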
All I'm asking for is something robust... :)
Great, thanks
/alessandro

On Fri 31 Jul 2009 03:46, Alessandro Rubini pondered:
For some reason - when I'm ping flooding while tftping a large file (with a large tftp block size) - things hang. If I set the block size to under the MTU, it works fine. Do you get the same?
Didn't try, and I can't do that today. I suspect either your ping is over-mtu, so each new fragment triggers the above code, or simply your ether+uboot can't keep up with the data rate.
I tried on a different network (tftp a 18M file) - and it worked without issues while ping flooding...
Until I filled up the max number of packets on the target (and it did start losing things).
"sudo ping -l 3 -f targetip" worked fine while transferring the file...
"sudo ping -l 4 -f targetip" made things fall over - but this is a function of the ethernet driver, not anything else. (We pre-allocate 4 packets worth of info, and only allow 4 packets to stack up.) Even when there is no fragmentation, this causes it to fail...
So, I'll have to see what is going on with my network at home - it could just be that the network couldn't keep up...

On Fri 31 Jul 2009 03:46, Alessandro Rubini pondered:
Thanks for your comments.
+#ifndef CONFIG_TFTP_MAXBLOCK
+#define CONFIG_TFTP_MAXBLOCK 16384
It is more than tftp - nfs could also use the same.
Yes, I know. But most users are tftp ones. And if you want an even number (like 16k) as a tftp packet, you need to add the headers and the sequence count. And I prefer to have the useful number in the config. So I used "TFTP" in the name so that NFS users know they must do some calculation.
How about CONFIG_NET_MAXDEFRAG instead?
We could have MAXPAYLOAD if we count in NFS overhead as well (I don't know how much it is, currently). Hope you see my point.
Not really.
IMHO - The protocol max payload should be taken care of on the protocol side, not the common network side.
It then becomes:
#define TFTP_MTU_BLOCKSIZE (CONFIG_NET_MAXDEFRAG - TFTP_OVERHEAD)
#define NFS_READ_SIZE (CONFIG_NET_MAXDEFRAG - NFS_OVERHEAD)
or something like that (since NFS likes to be a power of two - 1024, 2048, etc - it would need to be tweaked a little), but you get the idea...
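[Editor's note] Robin's suggestion, spelled out as a compilable sketch. CONFIG_NET_MAXDEFRAG is the proposed name; TFTP's 4-byte overhead (2-byte opcode plus 2-byte block number) matches the "+ 4" already in the patch, while the NFS overhead value here is only a placeholder, not the real RPC header size:

```c
/* One network-side knob sizes the reassembly buffer... */
#ifndef CONFIG_NET_MAXDEFRAG
#define CONFIG_NET_MAXDEFRAG	16384
#endif

/* ...and each protocol derives its own usable payload from it. */
#define TFTP_OVERHEAD		4	/* opcode + block number */
#define TFTP_MTU_BLOCKSIZE	(CONFIG_NET_MAXDEFRAG - TFTP_OVERHEAD)

#define NFS_OVERHEAD		80	/* placeholder, not the real RPC size */
#define NFS_READ_SIZE		(CONFIG_NET_MAXDEFRAG - NFS_OVERHEAD)
```

The trade-off versus the patch as posted is which number the user sees in the config: the raw buffer size (Robin) or the usable tftp block size (Alessandro).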
+static IP_t *__NetDefragment(IP_t *ip, int *lenp)
+{
I don't understand the purpose of the lenp.
The calling function doesn't use the len var, except for ICMP_ECHO_REQUEST, which are not allowed to be fragmented.
I eliminated it - and suffered no side effects.
Well, since the caller has this "len" variable, I didn't want to leave it corrupted. But if it's actually unused after this point, we can well discard it.
OK.
+#else /* !CONFIG_IP_DEFRAG */
+static inline IP_t *NetDefragment(IP_t *ip, int *lenp)
+{
+	return ip;
+}
+#endif
This needs to have the same logic (ip_off & (IP_OFFS | IP_FLAGS_MFRAG)) as the above function. See comment below.
Yes, correct. Thanks.
Were you going to send an update for Ben?

Alessandro Rubini wrote:
Were you going to send an update for Ben?
Yes, but I was waiting for your confirmation. Will do tomorrow.
/alessandro
Great! Looking forward to it. Sorry for my silence on this but my testbed is currently in a moving box so I don't have much to contribute... Big thanks to Robin for all the due diligence.
regards, Ben
participants (3):
- Alessandro Rubini
- Ben Warren
- Robin Getz