Impacts of Encryption Everywhere (any solution?)

Matt Erculiani merculiani at gmail.com
Mon May 28 16:50:57 UTC 2018


In addition to the "bump in the wire", you could also enable larger frame
sizes downstream, since you're already completely disassembling and
reassembling the packets. Large downloads or uploads could see overhead go
from about 3% at 1500B to about 0.5% at 9100B. It's not much, but every
little bit counts. (The preamble and the Ethernet, IP, and TCP headers all
need to be sent across the circuit less often to get the same amount of
data through.)

Looking only at the throughput of L4 payloads on the 1 Mbps link, you get:
1500B MTU = 956 kbps
9100B MTU = 992 kbps

That extra ~36 kbps almost adds a whole additional home's worth of
bandwidth (roughly the 40 kbps link mentioned below), if my math is correct.
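
Here's the arithmetic as a quick Python sketch, assuming 8 bytes of
preamble, 14 of Ethernet header, and 4 of FCS per frame, 40 bytes of
IPv4+TCP headers inside the MTU, and ignoring the interframe gap and
TCP options:

  # L4 goodput on a 1 Mbps link as a function of MTU
  LINK_KBPS = 1000
  L2_OVERHEAD = 8 + 14 + 4   # preamble + Ethernet header + FCS (bytes)
  L3L4_HEADERS = 20 + 20     # IPv4 + TCP headers (bytes)

  def goodput_kbps(mtu: int) -> float:
      payload = mtu - L3L4_HEADERS   # TCP payload carried per frame
      wire = mtu + L2_OVERHEAD       # bytes actually on the wire
      return LINK_KBPS * payload / wire

  for mtu in (1500, 9100):
      # truncation reproduces the 956/992 figures above
      print(f"{mtu}B MTU = {int(goodput_kbps(mtu))} kbps")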

-Matt


On Mon, May 28, 2018, 11:17 Grant Taylor via NANOG <nanog at nanog.org> wrote:

> On 05/28/2018 08:23 AM, Mike Hammett wrote:
> > To circle back to being somewhat on-topic, what mechanisms are available
> > to maximize the amount of traffic someone in this situation could
> > cache? The performance of third-world Internet depends on you.
>
> I've personally played with Squid's SSL-bump-in-the-wire mode (on my
> personal systems) and was moderately happy with it.  -  I think that
> such is a realistic possibility in the scenario that you describe.
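>
> As a sketch of what that can look like (directive names and paths vary
> by Squid version; the CA cert is one you generate and have clients
> trust):
>
>   # squid.conf: intercept TLS, re-sign with a local CA, cache plaintext
>   http_port 3128 ssl-bump \
>     generate-host-certificates=on \
>     dynamic_cert_mem_cache_size=4MB \
>     cert=/etc/squid/bump-ca.pem
>   sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
>   acl step1 at_step SslBump1
>   ssl_bump peek step1    # peek at the TLS SNI first
>   ssl_bump bump all      # then re-encrypt with the local CA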
>
> I would REQUIRE /open/ and /transparent/ communications from the ISP and
> *VERY* strict security controls on the caching proxy.  I would naively
> like to believe that an ISP could establish a reputation with the
> community and build a trust relationship such that the community was
> somewhat okay with the SSL-bump-in-the-wire.
>
> It might even be worth leveraging WPAD or PAC to route specific URLs
> direct to some places (banks, etc) to mitigate some of the security risk.
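>
> The PAC file for that can be tiny; a sketch, with placeholder domain
> and proxy address:
>
>   function FindProxyForURL(url, host) {
>     // Sensitive sites bypass the bump entirely
>     if (dnsDomainIs(host, ".examplebank.com"))
>       return "DIRECT";
>     // Everything else goes through the caching proxy
>     return "PROXY 192.0.2.10:3128";
>   }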
>
> I would also advocate another proxy on the upstream side of the 1 Mbps
> connection (in the cloud if you will) primarily for the purpose of it
> doing as much traffic optimization as possible.  Have it fetch things
> and deal with fragments so that it can homogenize the traffic before
> it's sent across the slow link.  I'd think seriously about
> throwing some CPU (a single core off of any machine in the last 10 years
> should be sufficient) at compression to try to stretch the bandwidth
> between the two proxy servers.
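>
> As a rough illustration of what the compression buys (plain Python and
> zlib; the sample data is made up), text-heavy traffic shrinks
> dramatically while already-compressed media gains nothing:
>
>   import os
>   import zlib
>
>   def fraction_saved(payload: bytes, level: int = 6) -> float:
>       """Fraction of bytes removed by compressing the inter-proxy stream."""
>       return 1 - len(zlib.compress(payload, level)) / len(payload)
>
>   markup = b"<html><body>repetitive markup</body></html>" * 200
>   media = os.urandom(8800)  # stands in for JPEG/video payloads
>   print(f"markup: {fraction_saved(markup):.1%} saved")  # most of the bytes
>   print(f"media:  {fraction_saved(media):.1%} saved")   # ~0%, maybe negative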
>
> I'd also think seriously about a local root DNS zone slave downstream,
> and any other zone that I could slave, for the purpose of minimizing the
> number of queries that need to get pushed across the link.
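>
> In BIND that's only a few lines of named.conf; the transfer sources
> below are the ones RFC 7706 lists for the root zone (verify before
> relying on them):
>
>   zone "." {
>       type slave;
>       file "root.zone";
>       notify no;
>       masters { 192.0.32.132; 192.0.47.132; };  // lax/iad.xfr.dns.icann.org
>   };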
>
> I've been assuming that this 1 Mbps link is terrestrial, which means
> I'd also explore something like a satellite link with more
> bandwidth.  Sure the latency on it will be higher, but that can be
> worked with.  Particularly if you can use some intelligence to route
> different CoS / ToS / DiffServ (DSCP) across the different links.
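>
> On a Linux router that's plain policy routing; a sketch with example
> interface names and next hops:
>
>   # Keep latency-sensitive EF traffic on the low-latency terrestrial link
>   iptables -t mangle -A PREROUTING -m dscp --dscp-class EF -j MARK --set-mark 1
>   ip route add default via 192.0.2.1 dev terr0 table 100
>   ip rule add fwmark 1 table 100
>   # Bulk traffic defaults to the fatter, higher-latency satellite link
>   ip route replace default via 198.51.100.1 dev sat0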
>
> I think there are options and things that can be done to make this viable.
>
> Also, considering that the village has been using a 40 kbps link,
> sharing a 1 Mbps (or 1,000 kbps) link is going to be a LOT better than
> it was.  The question is, how do you stretch a good thing as far as
> possible.
>
> Finally, will you please provide some pointers to the discussion you're
> talking about?  I'd like to read it if possible.
>
>
>
> --
> Grant. . . .
> unix || die
>


