Notes from the October NANOG meeting

Stan Barber sob at academ.com
Wed Oct 26 03:22:24 UTC 1994


Here are my notes from the recent NANOG meeting. Please note that any
mistakes are mine. Corrections, missing information, or further exposition
of any of the information here will be gratefully accepted and added to this
document, which will be available via anonymous ftp later this month.

----------------------------------------------------------------------------
NANOG
Notes by Stan Barber <sob at academ.com>
[Please note that any errors are mine, and I'd appreciate corrections being
forwarded to me.]

Elise Gerich opened the meeting with Merit's current understanding of the
state of the transition. THENET, CERFNET and MICHNET have announced
specific dates for their transitions.

John Scudder then discussed some modelling he and Sue Hares have done on
the projected load at the NAPs. The basic conclusions are that the FDDI
technology (at Sprint) will be saturated sometime next year and that
load-balancing strategies among NSPs across the NAPs are imperative for the
long-term viability of the new architecture. John also expressed concern
over the lack of an expressed policy for the collection of statistical data
by the NAP operators. All of the NAP operators were present and stated that
they will collect data, but that there are serious and open questions
concerning the privacy of that data and how to publish it appropriately.
John said that collecting the data was most important: without the data,
there is no source information from which publication becomes possible. He
said that MERIT/NSFNET had already tackled these issues. Maybe the NAP
operators can use this previous work as a model to develop their own
policies for publication.

After the break, Paul Vixie discussed the current status of the DNS and
BIND, and specifically DNS security. He outlined the reasons why the DNS is
not secure. There are two papers on this topic, and both are included in
the current BIND kit, so the information is freely available.

Consider the case of telnetting across the Internet and getting what
appears to be your machine's login banner. Doing a double check
(host->address, then address->host) will help eliminate this problem.
hosts.equiv and .rhosts are also sources of problems. Polluting the cache
is a real problem, and UDP flooding is another. CERT says that doing rlogin
is bad, but that does not solve the cache pollution problem.
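A minimal sketch of that double check, written here in Python purely to
illustrate the idea (the era's implementations did this in C inside the
r-commands and tcp_wrappers):

    import socket

    def double_check(addr):
        # Cross-check a peer: address -> name, then name -> addresses.
        # Trust the PTR name only if the forward lookup of that name
        # includes the original address.
        try:
            name, _, _ = socket.gethostbyaddr(addr)
            _, _, addrs = socket.gethostbyname_ex(name)
        except socket.error:
            return None
        return name if addr in addrs else None

If the forward lookup does not include the original address, the PTR record
may be forged, and names in hosts.equiv or .rhosts should not be trusted.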

How to defend?

1. Validate the packets returned in response to a query. Routers should
drop UDP packets whose source address doesn't match what it should be
(e.g., a UDP packet arrives on a WAN link when it should have come in via
an Ethernet interface).

2. There are a number of static validations of packet format that can be
done. Adding some kind of cryptographic information to the DNS would also
help. Unfortunately, that effort moves very slowly because there are a
number of strongly conflicting opinions.
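The talk didn't prescribe an implementation, but here is a toy Python
sketch of the kind of validation meant in point 1, as seen from the
resolver end (helper names are mine, and only the simplest checks are
shown):

    import os, socket, struct

    def make_query(name, qtype=1, qclass=1):
        # Build a minimal DNS query; returns (transaction id, packet).
        txid = struct.unpack(">H", os.urandom(2))[0]
        header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode()
                         for p in name.rstrip(".").split(".")) + b"\x00"
        return txid, header + qname + struct.pack(">HH", qtype, qclass)

    def query_and_validate(server, name):
        txid, pkt = make_query(name)
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(5)
        s.sendto(pkt, (server, 53))
        reply, (src, sport) = s.recvfrom(512)
        # The reply must come from the server we asked and must echo
        # our transaction ID (a fuller check also compares the question).
        if (src, sport) != (server, 53):
            raise ValueError("reply from unexpected source")
        if struct.unpack(">H", reply[:2])[0] != txid:
            raise ValueError("transaction ID mismatch")
        return reply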

What is being done?

The current beta of BIND has almost everything fixed that can be fixed
without a new protocol.  Versions prior to 4.9 are no longer supported.

Paul is funded half-time by the Internet Software Consortium. Rick Adams
funds it via UUNET's non-profit side.  Rick did not want to put it under
GNU.

DNS version 2 is being discussed. This is due to the limit on the size of
the UDP packet.  Paul M. and Paul V. are working to say something about
this at the next IETF.
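For context on that limit (my own back-of-the-envelope, not from the talk):
a DNS reply over UDP is capped at 512 bytes, which with name compression
bounds how many answers fit before truncation forces a fallback to TCP:

    # 12-byte header + question + N answers; with compression an A record
    # costs about 16 bytes (2 name pointer + 2 type + 2 class + 4 TTL
    # + 2 rdlength + 4 rdata).  The question size here is an assumption.
    HEADER, QUESTION, PER_A_RECORD = 12, 20, 16
    print((512 - HEADER - QUESTION) // PER_A_RECORD)   # -> 30 A records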

HP, Sun, DEC and SGI are working with Paul to adopt BIND 4.9.3 once it
goes into production.

After this comes out, Paul will start working on other problems. One
problem is the size of BIND in core. This change will include using the
Berkeley db routines to feed BIND from a disk-based database.

There will also be some effort to support better load balancing.

What about service issues? Providing name service is a start.

DEC and SGI will be shipping BIND 4.9.3 with their next releases.

Paul has talked to Novell, but no one else.... Novell has not been helpful
from the non-Unix side.


RA Project: Merit and ISI, with a subcontract to IBM

ISI does the Route Server Development and the RA Futures
Merit does the Routing Registry Databases and Network Management

The Global Routing Registry consists of the RADB, various private routing
registries, RIPE and APNIC. The RADB will be used to generate route server
configurations and potentially router configurations.

1993 -- RIPE 81
1994 -- PRIDE tools
April 1994 -- Merit Routing Registry
September 1994 -- RIPE-181
October 1994 -- RIPE-181 Software implementation
November 1994 -- NSP Policy Registrations/Route Server Configurations

Why use the RADB? Troubleshooting, Connectivity, Stability

The Route Server by ISI with IBM

The route servers facilitate routing information exchange. They don't
forward packets. There are two at each NAP, sharing one AS number. They
provide route selection and distribution on behalf of clients (NSPs).
[They replicate gated's single-table design; each replicated table is a
"view."] Multiple views support clients with dissimilar route selection
and/or distribution policies. BGP4 and the BGP4 MIB are supported. The
RS's AS is inserted in the AS path, and the MED is passed unmodified (this
appears controversial).
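A toy rendering of the view idea (my own model, not the RS implementation):
each view runs an independent route selection over the routes received from
peers, and clients with identical policy share a view:

    # received: {prefix: [(as_path, next_hop), ...]} from all peers.
    # policy: a per-view filter; shortest AS path wins in this toy model.
    def best_routes(received, policy):
        best = {}
        for prefix, paths in received.items():
            allowed = [p for p in paths if policy(prefix, p)]
            if allowed:
                best[prefix] = min(allowed, key=lambda p: len(p[0]))
        return best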

The route servers are up and running on a testbed and have been tested with
up to 8 peers and 5 views. The target ship date to 3 NAPs is October 21.
The fourth will follow soon after.

The network management aspect of the RA project uses a hierarchically
distributed network management model. At the NAP there is only local NM
traffic; NAP problems are externalized, and SNMPv1 and SNMPv2 are
supported. OOB access provides seamless PPP backup and console port access,
so the remote debugging environment is identical to the local debugging
environment.
The centralized network management system at Merit polls the distributed
rovers for problems and consolidates them into the ROC alert screen. It has
been operational since August 1st and is operated by the University of
Michigan Network Systems at the same location as the previous NSFNET NOC,
with 24/7 human operator coverage.

Everything should be operational by the end of November.

Routing Futures -- The route server decouples packet forwarding from
routing information exchange, for scalability and modularity. For example,
explicit routing will be supported (with the development of ERP), and IPv6
will be provided. They are analyzing the RRDB and defining a general policy
language (backward compatible with RIPE 181). Routing policy consistency
checking and aggregation will also be developed.

Securing the route servers -- All of the usual standard mechanisms are
being applied: single-use passwords, MAC-layer bridges, etc. How do we keep
the routes from being corrupted intentionally? Denial-of-service attacks
are possible.

A design document on the route server will be available via the
RRDB.MERIT.EDU WWW server.

There is serious concern about synchronization of the route servers and the
routing registries. No solution has been implemented yet. Merit believes it
will do updates at least once a day.

Conversion from PRDB to RRDB

The PRDB is AS 690 specific, driven by NACRs, updated twice weekly, and AUP
constrained.

The RADB has none of these constraints.

Migration will occur before April of 1995. The PRDB will temporarily be
part of the Global Routing Registry during the transition.

Real soon now -- Still send NACRs; they will be entered into both the PRDB
and the RRDB. Consistency checking will be more automated: output for AS
690 will be generated from both and compared to check consistency. While
this is happening, users will do what they always have. [Check ftp.ra.net
for more information.]

There is a lot of concern among the NANOG participants about the
correctness of all the information in the PRDB. Specifically, there appears
to be some inaccuracy in the information (e.g., home-AS data). ESnet has a
special concern about this.

[dsj at merit.edu to fix the missing home-AS problem]

Transition Plan:
1. Continue submitting NACRs
2. Start learning RIPE 181 (a schematic example follows this list)
3. Set/Confirm your AS's Maintainer object for future security
4. Switch to using Route Templates (in December)
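As a taste of RIPE 181: objects are blocks of attribute: value lines. A
route object might look roughly like the following (the attribute names
are from memory and the values are placeholders; check the RIPE-181
document for the real syntax):

    route:   192.0.2.0/24
    descr:   Example regional network
    origin:  AS64512
    mnt-by:  MAINT-EXAMPLE
    source:  RADB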


When it all works -- the RADB will be the source for the AS690
configuration, NACRs will go away, and local registries will be used.

The RADB is to generate the AS690 configuration in the second week of
December. NACRs are to die at the end of that week.

Proxy Aggregation -- CIDR by Yakov Rekhter

Assumptions -- Need to match the volume of routing information with the
available resources, while providing connectivity service, on a
per-provider basis. Need to match the amount of resources with the utility
of the routing information, also on a per-provider basis.

But what about "MORE THRUST?" It's not a good answer: it drives costs up,
doesn't help with the complexity of operations, and eliminates small
providers.

Proxy aggregation -- A mechanism to allow aggregation of routing
information originated by sites that are BGP-4 incapable.

Proxy aggregation -- problems -- full consensus must exist for it to work.

Local aggregation -- reconnects the entity that benefits from the
aggregation with the party that creates the aggregation. Bilateral
agreements would control the disposition of local aggregation.

Potential candidates for local aggregation -- a longer prefix in the
presence of a shorter prefix, adjacent CIDR blocks, and aggregation over
known holes (a sketch of the first two cases follows).
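Those first two cases are mechanical enough to show in a few lines of
Python (the modern ipaddress module, used purely to illustrate the
arithmetic; the prefixes are documentation examples):

    import ipaddress

    routes = [
        ipaddress.ip_network("198.51.100.0/25"),    # adjacent pair...
        ipaddress.ip_network("198.51.100.128/25"),  # ...merges to a /24
        ipaddress.ip_network("203.0.113.0/24"),     # shorter prefix...
        ipaddress.ip_network("203.0.113.64/26"),    # ...covers this one
    ]
    # collapse_addresses drops subsumed prefixes and joins neighbors.
    for net in ipaddress.collapse_addresses(routes):
        print(net)          # -> 198.51.100.0/24, then 203.0.113.0/24

Aggregation over known holes is the harder case, since the aggregate then
claims space the aggregator cannot actually reach.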

Routing in the presence of local aggregation --
        The AS and router that did the aggregation are identified via BGP
(the AGGREGATOR attribute).
        The aggregate should be registered in the RRDB.
Summary -- adding more memory to routers is not an answer
Regionals should aggregate their own CIDR blocks
An NSP may do local aggregation and register it in the RRDB.

Optimal routing and large scale routing are mutually exclusive.
CIDR is the only known technique to provide scalable routing in the Internet.
Large Internet and the ability of every site to control its own routing are
mutually exclusive.

Sprint Network Reengineering

A T-3 network with sites in DC, Atlanta, Ft. Worth and Stockton currently.
It will be expanding to Seattle, Chicago and the Sprint NAP in the next
several months. ICM uses this network for transit from one coast to the
other. They expect to create a separate ICM transit network early next
year.

Next NANOG will be at NCAR in February.

PacBell NAP Status--Frank Liu

The Switch is a Newbridge 36-150.

NSFNET/ANS connected via Hayward today.
MCINET via Hayward today.
PB Labs via Concord today.

Sprintlink will connect via San Jose (not yet connected).

NETCOM will connect via Santa Clara in the next month.

APEX Global Information Services (based in Chicago) will connect via Santa
Clara, but has not yet.

The Packet Clearing House (a consortium for small providers) is connected
via Frame Relay to the PB NAP. Its members will connect via one router to
the NAP. It is being led by Electric City's Chris Allen.

CIX connections are also in the cloud, but not in the same community yet.

Testing done by Bellcore and PB.
[TTCP was used for testing. The data was put up and removed quickly, so I
did lose some in taking notes.]
One Source (TAXI/Sonet)  -> One Sink
Two Sources (TAXI/Sonet) -> One Sink

Five Sources (Ethernet-connected) -> One Sink (Ethernet-connected)

Equipment issues -- The DSU HSSI clock mismatches the data rate. Sink
devices do not have enough processing power to deal with large numbers of
512-byte packets.

One Source -> One Sink

MSS             Window          Throughput (out of 40 Mb/sec)
4470            51000           33.6
4470            25000           22.33


Two Sources -> One Sink

MSS             Window          Throughput (out of 40 Mb/sec)
4470            18000           33.17   (.05% cell loss, .04% packet retrans)
1500            51000           15.41   (.69% cell loss, 2.76% packet retrans)

Conclusions

Maximum throughput is 33.6 Mbps for the 1:1 connection.

Maximum throughput will be higher when the DSU HSSI clock and data-rate
mismatch is corrected.

The cell loss rate is low (.02% -- .69%).

Throughput degraded when the TCP window size was greater than 13000 bytes.

Large switch buffers and router traffic shaping are needed.

[The results appear to show TCP backing-off strategy engaging.]
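The window dependence is the classic window/RTT bound on TCP throughput. A
quick check (the RTT is my assumption; it was not given in the talk):

    # Max TCP throughput is bounded by window size / round-trip time.
    window_bytes = 51000
    rtt_sec = 0.012                            # assumed ~12 ms test path
    print(window_bytes * 8 / rtt_sec / 1e6)    # ~34 Mb/s, near the
                                               # observed 33.6 Mbps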

Future Service Plan of the SF-NAP-- Chin Yuan

Currently, the NAP does best effort with RFC 1490 encapsulation.

March 1995 -- Variable Bit Rate, Sub-Rate Tariff (4, 10, 16, 25, 34 and
40 Mbps; 51, 100 and 140 Mbps on OC3c). At the CPE: static traffic shaping
and RFC 1483 and 1577 support. [Traffic shaping is to be supported by Cisco
later this year in the AIP card for both OC3c and T3.]

June 1995 -- Support for DS1 ATM (DXI and UNI at 128, 384 kbps and 1.4Mbps)

1996 or later -- Available Bit Rate and SVCs. At CPE: Dynamic Traffic Shaping

Notes on Variable Bit Rate:
Sustainable Cell Rate(SCR) and Maximum Burst Size (MBS)---
          * Traffic Policing
          * Aggregated SCR is no greater than the line rate
          * MBS = 32, 100, 200 cells (Negotiable if > 200 cells)
Peak Cell Rate (possible)
          * PCR <= line rate

Traffic shaping will be required for the more advanced services. Available
Bit Rate will require feedback to the router.
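The SCR/MBS policing described above is conventionally done with a
leaky-bucket scheme (ATM's GCRA); the talk didn't give the algorithm, so
here is a generic token-bucket sketch of the same idea:

    class CellPolicer:
        # Token-bucket approximation of SCR/MBS policing.
        # scr: sustainable cell rate (cells/sec); mbs: burst depth (cells).
        def __init__(self, scr, mbs):
            self.scr = float(scr)
            self.depth = float(mbs)
            self.bucket = float(mbs)
            self.last = 0.0

        def conforming(self, now):
            # Refill by elapsed time, then spend one token per cell.
            self.bucket = min(self.depth,
                              self.bucket + (now - self.last) * self.scr)
            self.last = now
            if self.bucket >= 1.0:
                self.bucket -= 1.0
                return True
            return False    # non-conforming: tag or drop the cell

A shaper at the CPE is the mirror image: rather than dropping
non-conforming cells, it delays them until they conform.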


ANS on performance --- Curtis Villamizar
There are two problems: aggregation of lower-speed TCP flows, and support
for high-speed elastic supercomputer applications.

RFC 1191 (path MTU discovery) and RFC 1323 (TCP extensions for high
performance) are very important for addressing these problems.

The work that was done -- previous work showed that the top speed for TCP
was 30 Mb/s.

The new work -- TCP Single Flow, TCP Multiple Flow

Environment -- two different DS3 paths (NY->MICH: 20 msec;
NY->TEXAS->MICH: 68 msec), four different versions of the RS6000 router
software, and Indy/SCs.

Conditions -- two background conditions (no background traffic, and a
reverse TCP flow intended to achieve 70-80% utilization), with differing
numbers of TCP flows.
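The bandwidth-delay products of these paths show why RFC 1323 matters (45
Mb/s is the DS3 line rate; the arithmetic is mine, not from the talk):

    # Bytes in flight needed to fill a DS3 at each round-trip time.
    for rtt in (0.020, 0.068):         # NY->MICH, NY->TEXAS->MICH
        print(45e6 * rtt / 8)          # -> 112500 and 382500 bytes
    # Both exceed the 65535-byte limit of an unscaled TCP window, so
    # RFC 1323 window scaling is needed to fill either path.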

Results are available on-line via http. Temporarily they are located at:

http://tweedledee.ans.net:8001/

They will be on rrdb.merit.edu more permanently.

ATM -- What Tim Salo wants from ATM....
[I ran out of alertness, so I apologize to Tim for having extremely sketchy
notes on this talk.]

MAGIC -- Gigabit TestBed

Currently: local-area ATM switches over SONET, mostly FORE switches.

LAN encapsulation (ATM Forum) versus RFC 1537.

Stan Barber                                                     sob at academ.com
