I'm implementing its features in my own way, and I've found that multiplying the .nextthink delta by 250 instead of 255 gives perfect accuracy for U_LERPFINISH in most cases, because (1/250 == 0.004), while (1/255 == 0.0039215686275).
(0.1*250 == 25), which fits in a byte and is losslessly decoded as (25/250 == 0.1), while (0.1*255 == 25.5) gets truncated to 25 and then lossily decoded as (25/255 == 0.0980392156863).
As .nextthink deltas most often occur in multiples of 0.1, I suggest that engines that adopted the FitzQuake protocol make this change.
The maximum possible ratio between values decoded with the two scales is (250/255 == 0.9803921568627) in one direction and (255/250 == 1.02) in the other, a mismatch small enough that it can probably be rounded away without very noticeable artifacts.
I tried to post this at func_, but got an "error 42".