akamai yesterday - what in the world was that

Denys Fedoryshchenko nuclearcat at nuclearcat.com
Wed Feb 12 17:24:22 UTC 2020


> It would be really nice if the major CDNs had virtual machines small
> network operators with very expensive regional transport costs could
> spin up.  Hit rate would be very low, of course, but the ability to
> grab some of these mass-market huge updates and serve them on the
> other end of the regional transport at essentially no extra cost would
> be great. I'm sure legal arrangements make that difficult, though.
+1

I think the primary reason is that many major CDN offload nodes are 
implemented in such a way that they require a significant amount of 
maintenance and support. And it doesn't matter whether the ISP is small 
or big - there will be problems, and when the company that installed 
the CDN node is huge, like Facebook or Google, turning all the 
bureaucratic wheels just to replace a silly power supply or HDD comes 
at a huge cost for them. Add to that the fact that small ISPs often 
don't have 24/7 support shifts, are less qualified for complex issues, 
and are more likely to have poor infrastructure 
(temperature/power/reliability), which means more support expenses.
And they don't give a damn that, because of their "behemothness", they 
widen the digital inequality gap. When a large ISP or a member of an 
ISP cartel enters a regional market, local providers cannot compete 
with it, since they cannot qualify for CDN nodes of their own due to 
their traffic volume.

Many CDNs also run questionable "BGP as signalling only" setups with 
proprietary TCP probing/loss measurement that often doesn't work 
reliably. Each of them is trying to reinvent the wheel, "this time not 
round, but dodecahedral". And when it fails, the ISP wastes support 
time until the issue reaches someone who understands it. In most cases 
this is a black-box setup, and when a problem happens the ISP endlessly 
tries to explain it to outsourced support staff, who have very limited 
access themselves and respond like robots following their "support 
workflow", with zero feedback on common problems.

Honestly, it's time to develop an open standard for caching content on 
open CDN nodes, one that is easy to use for both content providers and 
ISPs.
For example, at one time many torrent clients had a special hardcoded 
"retracker.local" hostname which, optionally (if the ISP resolved it 
via a static entry in its recursor), was used to discover the nearest 
seeders inside the local provider's network.
http://retracker.local/
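In client terms the logic was roughly the following (a sketch in 
Python; "retracker.local" and the /announce path follow the usual 
BitTorrent tracker convention, the rest is purely illustrative):

    import socket

    def maybe_add_local_retracker(trackers):
        # If the ISP's recursor has a static entry for retracker.local,
        # announce there too so peers on the same network find each other.
        try:
            socket.gethostbyname("retracker.local")
        except socket.gaierror:
            return trackers                 # no local retracker, nothing changes
        return trackers + ["http://retracker.local/announce"]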
Maybe a similar scheme is possible: if the content provider wants the 
"open" CDN to work, it publishes an alternative scheme such as 
cdn://content.provider.com/path/file, or some other kind of hint, along 
with a content validity/authenticity mechanism. The browser then 
attempts CDN discovery, for example by resolving 
"content.provider.com.reservedtld", and pushes the request through the 
local node if one is found.
I'm sure someone will have a better idea of how to do that.
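Very roughly, the browser-side discovery step could look like the 
sketch below (Python, purely illustrative: the ".reservedtld" suffix 
and the SHA-256 digest stand in for whatever reserved name and 
validity/authenticity mechanism an actual standard would define):

    import hashlib
    import socket
    import urllib.request

    RESERVED_SUFFIX = ".reservedtld"   # hypothetical reserved label, like retracker.local

    def fetch_via_open_cdn(origin_host, path, expected_sha256):
        # Try a local open-CDN node first, then fall back to the origin.
        # expected_sha256 stands in for a provider-published integrity hint
        # (e.g. a signed manifest), so a cache cannot alter the content.
        for host in (origin_host + RESERVED_SUFFIX, origin_host):
            try:
                socket.gethostbyname(host)   # does the ISP resolve a local node?
                data = urllib.request.urlopen(
                    "http://%s%s" % (host, path), timeout=5).read()
            except OSError:
                continue                     # node absent/unreachable, try the next
            if hashlib.sha256(data).hexdigest() == expected_sha256:
                return data                  # verified, safe to use regardless of source
        raise IOError("no candidate returned verifiable content")

The point is that the cache node needs no trust at all: it either 
serves bytes that match the provider's published hash/signature, or the 
client falls back to the origin.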

As a result, installing such an "offloading node" would amount to 
spinning up a container/VM and, if the load grows, simply increasing 
the number of server/VM instances.



