by Spike » Thu Jan 22, 2015 6:41 pm
the larger a packet, the more likely you'll get interference corrupting it, resulting in loss.
one way around this is to just repeat the packet multiple times. of course, the redundant copies eat bandwidth, which will just increase packetloss (through congestion) on wired links.
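a minimal sketch of that repetition idea (all names here are invented, this is not engine code): the sender transmits each datagram a few times with a sequence number prepended, and the receiver throws away copies it has already seen.

```python
# hypothetical sketch: resend each datagram REPEATS times and let the
# receiver discard duplicates by sequence number.
REPEATS = 3  # assumption: 3 copies per logical packet

def send_redundant(sock_send, seq, payload):
    """Prepend a sequence number and transmit the datagram REPEATS times."""
    datagram = seq.to_bytes(4, "big") + payload
    for _ in range(REPEATS):
        sock_send(datagram)

class DedupReceiver:
    """Drop copies of datagrams we have already seen."""
    def __init__(self):
        self.seen = set()

    def receive(self, datagram):
        seq = int.from_bytes(datagram[:4], "big")
        if seq in self.seen:
            return None  # duplicate copy, ignore it
        self.seen.add(seq)
        return datagram[4:]
```

any one of the copies getting through is enough, at the cost of multiplying the bandwidth used.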
both FTE and DP have network protocols that resend current state on loss. the server needs a journal of what was sent in each packet, and the protocol needs some way to detect loss (a gap in ack sequences or something), but the reduction in data can be significant, and large scenes can send entity data round-robin or so (as the previous frame is not instantly stale). the quakeworld protocol, on the other hand, keeps resending data until it's acked. that may have the best performance on a lan, but because it deltas from the last-known-received state, both ends need quite extensive state logs.
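the journal/ack-gap scheme can be sketched like this (invented names, not actual FTE/DP code): the server remembers which entities went into each outgoing packet, and when the client's acks skip a sequence number, everything journalled under the missing sequence gets marked dirty so its *current* state goes out again.

```python
# minimal sketch of a resend-current-state journal.
class EntityJournal:
    def __init__(self):
        self.journal = {}   # outgoing seq -> set of entity ids sent in that packet
        self.dirty = set()  # entities whose current state needs (re)sending
        self.last_ack = 0

    def sent(self, seq, entity_ids):
        """Record what went into outgoing packet `seq`."""
        self.journal[seq] = set(entity_ids)
        self.dirty -= set(entity_ids)

    def acked(self, ack_seq):
        """A gap in the ack sequence means those packets were lost:
        mark their entities dirty so fresh state is resent."""
        for lost in range(self.last_ack + 1, ack_seq):
            self.dirty |= self.journal.pop(lost, set())
        self.journal.pop(ack_seq, None)  # delivered, forget it
        self.last_ack = ack_seq
```

note it resends the entity's *current* state, not the old packet, so there's no need to keep the full payloads around the way a delta-from-last-received protocol must.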
the ethernet packet limits remain relevant, because fragment reassembly can be abused for ddos attacks.
in theory, routers are not meant to reassemble fragments; only the final receiver does. however, the port numbers are present only in the first fragment, so NAT boxes *MUST* have some way of tracking that, and the cheesy way of handling it breaks if you have packets coming from multiple sources/routes.
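to make that concrete, here's a toy illustration (entirely invented, not how any particular NAT is implemented) of the "cheesy" approach: the NAT keys on the first fragment's source address and ip id, and applies the same forwarding decision to the later, port-less fragments.

```python
# toy model: only the first fragment of a fragmented udp datagram carries
# the udp header (and thus the ports), so a naive NAT remembers the mapping
# keyed by (source ip, ip id) and reuses it for the later fragments.
class NaiveFragmentNat:
    def __init__(self):
        self.pending = {}  # (src_ip, ip_id) -> forwarding decision

    def handle(self, src_ip, ip_id, frag_offset, udp_ports):
        if frag_offset == 0:
            # first fragment: ports are visible, make the forwarding decision
            self.pending[(src_ip, ip_id)] = udp_ports
            return udp_ports
        # later fragment: no udp header; reuse the first fragment's decision
        return self.pending.get((src_ip, ip_id))  # None = no first fragment seen
```

this breaks exactly as the post says: if the first fragment arrives late, or the fragments take different routes and hit different boxes, the later fragments can't be matched to a mapping and get dropped.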
With widespread carrier-grade NAT on the horizon (combined with home/private NAT boxes), expect the worst.
the minimum ipv4 fragment size is something around 576(-headers) bytes. the standard permits silently dropping any fragments smaller than that regardless of how they arrived, fully blaming the sender when the packet never arrives. hurrah. any link that requires fragmentation below that *MUST* support reassembly before forwarding over ethernet or whatever (typically such extra fragmentation happens at the hardware/driver layer rather than in the router's kernel).
if an application generates packets larger than this limit, there is a chance that such packets will be fragmented and a chance that the receiver/NAT will drop all fragments.
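the back-of-the-envelope arithmetic (assuming a 20-byte optionless ipv4 header and an 8-byte udp header) works out like this:

```python
# how much udp payload fits under the 576-byte figure without
# risking ipv4 fragmentation on a worst-case path?
IPV4_MIN_REASSEMBLY = 576  # datagram size every ipv4 host must accept
IPV4_HEADER = 20           # assumption: no ip options
UDP_HEADER = 8

SAFE_UDP_PAYLOAD = IPV4_MIN_REASSEMBLY - IPV4_HEADER - UDP_HEADER  # 548 bytes

def may_fragment(payload_len):
    """True if a udp payload of this size might get fragmented (and thus
    potentially dropped by a fragment-hostile NAT or receiver)."""
    return payload_len > SAFE_UDP_PAYLOAD
```

so an application that keeps its datagrams at or under 548 bytes of payload stays on the safe side; anything bigger is gambling on the path's actual MTU.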
ipv6 has a higher minimum: 1280 bytes.