Vulnerabilities of Interconnection
William Waites
ww at styx.org
Fri Sep 6 17:01:58 UTC 2002
>>> "Jane" == Pawlukiewicz Jane <pawlukiewicz_jane at bah.com> writes:
>> Even if we were to model it, the best data we could get for
>> the "Internet" would be BGP routing tables. These are also
>> subjective views of the rest of the net. We could take a full
>> table, map all the ASN adjacencies, and then pick arbitrary
>> ASNs to "fail", then see who is still connected, but we are
>> still dealing with connectivity relative to us and our peers,
>> even 5+ AS-hops away.
Jane> I want to make sure I understand this. As I understand it,
Jane> this would work regarding routing only. It would be a model
Jane> that would have a result of ones and zeros, so to speak,
Jane> meaning either you're connected or you're not. What this
Jane> doesn't take into consideration, I believe, is the effects
Jane> of congestion regarding increased traffic due to news
Jane> traffic and rerouting that takes place whenever there is a
Jane> loss of a site.
I believe you are correct. Modelling the connectivity matrix [1] is
good as a first approximation. The next step would be to estimate the
transition probabilities between ASi and ASj; you could do this by
looking at the adjacencies one step out [2], for example. There are
other methods of estimating the transition probabilities, but most are
foiled by a lack of available data. You can get a pretty good
adjacency map by doing table dumps from all of the route servers,
looking glasses, etc.
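To make that concrete, here is a toy sketch (hypothetical 4-AS
topology, made-up adjacencies, not real routing data) of how
footnotes [1] and [2] translate into code:

```python
# Footnote [1]: connectivity matrix for a hypothetical 4-AS topology.
# C[i][j] = 1 if ASi and ASj are adjacent, or i == j; 0 otherwise.
C = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
]
n = len(C)

# A[j] = number of adjacencies of ASj, not counting itself.
A = [sum(C[j]) - 1 for j in range(n)]

# Footnote [2]: weight each adjacent AS by its own degree, then
# normalize each row so that sum_j P[i][j] == 1.
P = []
for i in range(n):
    w = [C[i][j] * A[j] for j in range(n)]
    s = sum(w)
    P.append([x / s for x in w])

# Each row of P is now a proper probability distribution.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-9
```

In this toy case every AS has degree 2, so each row just spreads
probability evenly over its neighbours (and itself); with real BGP
data the degree weighting would bias traffic toward well-connected
ASs.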
Once you have the TP matrix, construct a vector of initial conditions
to represent likely traffic sources -- the ASs containing CNN and the
BBC, say -- and look at how the traffic dissipates through an n-step
Markov process [3].
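The n-step dissipation itself is just repeated vector-matrix
multiplication; a minimal sketch (the transition matrix P and the
4-AS topology are made up for illustration):

```python
# A hypothetical row-stochastic transition matrix over 4 ASs.
P = [
    [0.0, 0.5, 0.5, 0.0],
    [0.4, 0.0, 0.2, 0.4],
    [0.4, 0.2, 0.0, 0.4],
    [0.0, 0.5, 0.5, 0.0],
]

def step(x, P):
    """One Markov step: x' = x P."""
    m = len(P)
    return [sum(x[i] * P[i][j] for i in range(m)) for j in range(m)]

# Initial conditions: all traffic originates in AS0 (imagine the AS
# hosting a major news source).
x = [1.0, 0.0, 0.0, 0.0]
for _ in range(5):      # five-step dissipation
    x = step(x, P)

# Probability mass is conserved as traffic spreads through the mesh.
assert abs(sum(x) - 1.0) < 1e-9
```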
This will tell you something about how heavily loaded with traffic
certain ASs (the accumulation points) become, at least as a ratio to
"normal", but since we have no information about channel capacity
available within each AS, it doesn't say much about actual
congestion. It will, however, suggest where congestion is likely to
occur if links have not been overprovisioned by some ratio. I think ;)
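The "ratio to normal" idea above can be sketched by running the same
chain from an event-driven initial condition and from a baseline one,
then comparing per-AS loads (again, P and the topology are invented
for illustration):

```python
# Made-up row-stochastic transition matrix over 4 hypothetical ASs.
P = [
    [0.0, 0.5, 0.5, 0.0],
    [0.4, 0.0, 0.2, 0.4],
    [0.4, 0.2, 0.0, 0.4],
    [0.0, 0.5, 0.5, 0.0],
]

def propagate(x, P, steps):
    """Apply x' = x P repeatedly (an n-step Markov process)."""
    m = len(P)
    for _ in range(steps):
        x = [sum(x[i] * P[i][j] for i in range(m)) for j in range(m)]
    return x

# "Normal": traffic sources spread uniformly. "Event": everything
# originates in AS0 (the news source).
normal = propagate([0.25] * 4, P, 5)
event = propagate([1.0, 0.0, 0.0, 0.0], P, 5)

# ASs where the event load well exceeds the normal load are the
# candidate accumulation points where congestion is likely.
ratio = [e / b for e, b in zip(event, normal)]
```

Without per-AS channel capacities this ratio is only relative, which
is exactly the caveat above: it points at likely hot spots, not at
actual congestion.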
The trick is in estimating the transition probabilities, and I'm not
sure this is a good method. Using adjacencies from one hop out assumes
transit to two hops out; using n hops out implies transit to n+1
hops. The bigger n gets, the less accurately it represents the real
mesh, because once n exceeds, say, 4 or 5 -- or whatever the average
AS path length is these days -- it implicitly assumes symmetric
transit everywhere.
-w
[1] C_ij = 1 if i and j connected or i == j, 0 otherwise
[2] If A_j is the number of adjacencies for ASj, then set the
transition probabilities P_ij = (C_ij * A_j) / Sum_k (C_ik * A_k),
so that Sum_j P_ij = 1.
[3] The n-step transition probabilities are given by the matrix
power P^n, i.e. the entries (P^n)_ij.
--
William Waites <ww at styx.org> +1 416-717-2661
finger ww at styx.org for PGP keys
Idiosyntactix Research Laboratories
http://www.irl.styx.org -- NetBSD: ça marche!