Summary of Boulder Regional-Techs Meeting

mak mak
Wed Mar 10 17:14:40 UTC 1993

Summary of NSFNET Regional-Techs Meeting, January 21-22, 1993
in Boulder, Colorado

Merit sponsored a meeting of the NSFNET Regional-Techs in Boulder,
Colorado during January 21-22, 1993. The meeting was generously
hosted by Roy Perry of US West. FARNET met simultaneously in Denver.
The attendance list for the meeting is appended. Most of the regionals
and midlevels were represented, along with commercial network service 
providers, router vendors, government agency network providers
and NSF.

The purpose of this meeting was to allow the regional-techs to get
together and have a focused discussion about networking plans. The
Internet is evolving and direct action by the network operators and
router vendors needs to take place in the near term (6-8 months) in
order to provide a network architecture that allows for expected growth
rates.  Last November, Internet Engineering Task Force discussions
indicated that the NSFNET Backbone Service and the regional networks
should move quickly to implement an architecture called "Classless
Inter-Domain Routing" (CIDR).  This architecture suggests changes to
the nature of the routing protocols and the interactions between
routing domains (e.g., each regional is its own routing domain).  The
changes include moving to a new version of the Border Gateway Protocol
which supports the grouping, or aggregation, of routing information
when it is stored in routers and conveyed as protocol information
between routing domains.
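
To illustrate the aggregation idea in modern terms (a sketch only; the
prefixes and the use of Python's ipaddress module are illustrative, not
anything deployed in 1993): eight contiguous class C networks can be
advertised as a single supernet route.

```python
import ipaddress

# Eight contiguous class C (/24) networks held by one routing domain.
# The 198.32.x.x prefixes here are hypothetical examples.
nets = [ipaddress.ip_network(f"198.32.{i}.0/24") for i in range(8)]

# Under CIDR, a border router advertises one aggregate in place of
# eight individual routes.
aggregate = list(ipaddress.collapse_addresses(nets))
print(aggregate)  # [IPv4Network('198.32.0.0/21')]
```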

The Merit staff nominated a person to lead each session covering the
agenda items listed below, and "packed" the audience with friendly
experts on the respective topics.

Topics Covered in the Meeting

1.) GIX, NAPs, Route Servers 
    Elise Gerich (Merit), discussion leader

The NAP concept is not totally new.  Any time multiple organizations
are homed on a shared medium, it might be considered a NAP.  As more
and more organizations interconnect, it has become necessary to plan
these interconnections more carefully, and therefore several pilots
have emerged to test the functions needed at a NAP.  Alternet, PSI,
and Sprint initiated the MAE-East experiment, which could be
considered a distributed NAP.  RIPE and Merit have both experimented
with the concept of a route server.

The purpose of the session was to discover what the regional technical
representatives felt a route server should do and how it would
complement the system.

Notes from Elise's talk:


What is a route server?
How does it play in the NAP?
How to test if it works?

	Maximal Connectivity
	Stable, Consistent Routing
	Manageable Routing

	immature NAPs	e.g. Cornell implementation (not planned)
	adolescent NAPs	e.g. FIX-East (planned)
	coming of age of NAPs	- MAE-East

RS design goals

	2 proposed models:
		1. single RS with world wide routing information
		2. multiple RSs each with Regional (continental) routing

	RIPE is doing #2, Merit is working on #1.  General feeling was
	that a merged form might be best, use the DNS model.
	The RS is actually two part, a policy engine and a route engine.

	looks & smells like a NAP, doesn't taste like one.
	has a worldwide flavor

	scope of policy - management visibility
	move lvl2 to national cloud
	need route&policy filters
	where is the policy description language?
	need to distribute for disaster recovery.
	timing concerns.  
	how to communicate changes to remote RS
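
The two-part route server sketched above (a policy engine plus a route
engine) might be caricatured as follows; the peer AS numbers, prefixes,
and policy format are all hypothetical:

```python
import ipaddress

# Hypothetical policy table: for each peer AS at the NAP, the address
# blocks it is registered to announce.
POLICY = {
    690:  [ipaddress.ip_network("198.32.0.0/16")],
    1800: [ipaddress.ip_network("192.67.0.0/16")],
}

def accept(peer_as, prefix):
    """Policy engine: accept an announcement only if it falls inside a
    block registered for that peer AS."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(block) for block in POLICY.get(peer_as, []))

# Route engine: keep announcements that pass the policy filter; these
# are what the route server would redistribute to other participants.
announcements = [(690, "198.32.4.0/24"), (690, "10.0.0.0/8"),
                 (1800, "192.67.1.0/24")]
routing_table = [(a, p) for a, p in announcements if accept(a, p)]
print(routing_table)  # the 10.0.0.0/8 announcement is rejected
```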

2.) Implementation of CIDR and Supernetting
    Vince Fuller (BARRnet), discussion leader

CIDR is billed as a short term solution, and to get started there are
immediate operational actions needed. To quote V. Fuller, J. Yu, T.  Li
and K. Varadhan in RFC 1338, what is needed are "strategies for address
assignment of the existing IP address space with a view to conserve the
address space and stem the explosive growth of routing tables in
default-route-free routers run by transit routing domain providers."
The discussion included: how to phase in route aggregation, how to
configure routing policy for aggregation, how aggregation will be used,
test plans for the BGP-4 protocol, and the requirement for network
renumbering. Jeff Honig of the Cornell "gated" group led a discussion
about changes to the gateway daemon in support of CIDR. This was
important since the ANS backbone will be using the gated program
for its routing support.

Notable results of discussion at this session included agreements
that the regionals should cooperate to implement CIDR in multiple
phases, with BGP-4 and route aggregation to be targeted for June, 1993.
An Internet Draft should be written on deployment of CIDR in
the NSFNET backbone and the regional networks. Configuration
options will allow the backbone to accept route aggregates from
regional networks, and then announce those aggregates to other 
regionals and midlevels.
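
The effect on the backbone forwarding table can be sketched as a
longest-prefix match over aggregates (the names and prefixes here are
invented for illustration):

```python
import ipaddress

# Hypothetical backbone table after one regional announces a CIDR
# aggregate in place of its individual class C routes.
table = {
    ipaddress.ip_network("198.32.0.0/21"): "regional-A",  # aggregate
    ipaddress.ip_network("192.67.1.0/24"): "regional-B",  # single net
}

def next_hop(addr):
    """Longest-prefix match: the most specific entry containing addr wins."""
    a = ipaddress.ip_address(addr)
    matches = [n for n in table if a in n]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(next_hop("198.32.5.9"))    # regional-A, via the /21 aggregate
print(next_hop("192.67.1.200"))  # regional-B
```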

Vince's slides:

Thoughts on CIDR in the NSFNET Regionals

     Address assignments in preparation for CIDR
          Obtaining a block of class-Cs from the NIC
          Assignment within the regional
          Multiply-connected regionals

     Backbone/regional responsibilities for aggregation policy
          Who maintains the policy configuration?
          Which peer implements aggregation?

     Configuring NSFNET to support aggregation
          How should routing policy be specified and 
          conveyed between regionals and Merit/NSFNET?
          How should the current routing configuration
          process be modified to support aggregation configuration?
          What will gated support? (Jeff H.)

     How should inter-domain routing protocols (i.e. EGP, BGP 1-3,
     BGP 4, and later IDRP) be used in the CIDR environment?
     ("CIDR, default, or die")

     IGP considerations - VLSM + Class A/B/C

     Impact of CIDR on mixed-use net (i.e. AUP)

     Multiple Supernet blocks for policies?

3.) Address allocation strategies with CIDR
    Dan Long (NEARnet), discussion leader

This session covered the address administration aspects of CIDR.
Regionals should be prepared to handle configuration of address blocks,
and to do this there should be good communication between the network
operators and those handing out the network numbers. 

The Regional-techs discussed methods of obtaining, handling and
assigning network numbers to their members from blocks obtained
from the NIC. Regionals and midlevels are advised to utilize
CIDR address allocation in order to conserve routing table space
in the Internet.

Dan's slides:

Dan Long

Class C Allocation Rules

     Organizations should get contiguous blocks of C's based on 
     the number of hosts:
                    Hosts          C's
                   <256             1
                   <512             2
                   <1024            4
                   <2048            8
                   <4096           16
                   (<8192          32) upon request

     Organization should get contiguous blocks of C's based on the
     number of subnets:
                        >1 C per subnet to max of 32

     Subnetting C's is not required but is permitted, of course.
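
The host-count table above amounts to rounding up to a power of two; a
minimal sketch of the rule (the function name is ours):

```python
def class_c_count(hosts):
    """Number of contiguous class C networks per the table above: the
    smallest power-of-two block (each class C ~256 addresses) that
    holds the host count, up to 32 (the 32-C block is upon request)."""
    for count in (1, 2, 4, 8, 16, 32):
        if hosts < 256 * count:
            return count
    raise ValueError("more than 8192 hosts: a class B is indicated")

print(class_c_count(300))   # 2
print(class_c_count(4000))  # 16
```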

Address Allocation Strategies with CIDR

     Goals:     (Conflicting)
     Limit use of B's
     Maximize aggregation (CIDR/Supernetting)
     Limit explosion of C's before CIDR is ready
     Don't break LANs or LAN admins

Status of A's

     77 unassigned
         -64 Reserved
         -13 available

     IANA only

Status of B's

     *9000 unassigned
     IANA says IR may allocate small blocks to regional (e.g. NIC)


Status of C's

     *2,053,000 unassigned
              - 1,048,000 reserved
              - 1,005,000 available

                split into 8 blocks:

     192 + 193  Pre-existing
     194 + 195   Europe
     196 + 197   Others
     198 + 199   North America
     200 + 201   Central + South America
     202 + 203   Pacific Rim
     204 + 205   Other
     206 + 207   Other

     n-year blocks allocated to providers for customer use 
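
The geographic split of the class C space can be expressed as a simple
lookup on the first octet (a sketch; the function is ours):

```python
# Class C blocks by first octet, as in the table above.
REGIONS = {
    (192, 193): "Pre-existing",
    (194, 195): "Europe",
    (196, 197): "Others",
    (198, 199): "North America",
    (200, 201): "Central + South America",
    (202, 203): "Pacific Rim",
    (204, 205): "Other",
    (206, 207): "Other",
}

def region_of(class_c):
    """Return the allocation region for a class C network number."""
    first = int(class_c.split(".")[0])
    for octets, region in REGIONS.items():
        if first in octets:
            return region
    return None

print(region_of("198.49.0.0"))  # North America
print(region_of("202.12.0.0"))  # Pacific Rim
```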


Class B Allocation Rules

     Organizations with > 32 subnets
     Organizations with > 4096 hosts
     Organizations with a bridged LAN with stupid hosts

      1.  1-->2  (ARP)
      2.  2-->1  (via 3)



     Company with 25 subnets

     Conglomerate with 35 subsidiaries and 100 LANs per subsidiary

     State with 400 K-12 school districts with 10 schools per district + 
     2 LANs per school



     Where's the best dividing line between B's + C's?  Now? Later?

     How do regional registries + NSP's decide which addresses to assign
     out of a block of C's?


4.) Transition to "Next Generation NSFNET"
    Dan Jordt (NorthWestNet), discussion leader

Major transitions of internet operations need quite a lot of planning
in order to happen smoothly. If nothing else, Merit has learned this
over the last few years:-). Though it is too soon to know who the
new players will be for the NSFNET vBNS, RA and NAP providers, 
enough is known to project some of the potential problem areas,
forecast changes that may occur as a result of the new architecture,
and suggest some preparations that can be made in advance.
During this meeting, Peter Ford of the NSF Network Engineering
Group presented an overview of NSF's latest thoughts about the
upcoming solicitation for the follow-on NSFNET architecture.

Dan's slides:

Transition to the Next Generation


Dan Jordt

Director, Technical Services

MERIT's Annual Regional Techs Meeting
Boulder, Colorado

Thursday, January 21, 1993


Transition to the Next Generation NSFNET

          Review of NSF's Public Draft
          Program Solicitation

          Review of FARNET's Report

          What we know about the final solicitation

          What can we (the regionals techs) do to prepare
          for the transition



Review of NSF's Public Draft Program solicitation

          Division of Responsibilities

               Very high speed Backbone Network Services provider (vBNS)
               Network Access Point (NAP) manager
               Routing Authority (RA)

          Two separate awards (NAP/RA and vBNS)
          Funding:  approximately $10M per year
          Timing:  5 year cooperative agreements to commence April 1994


Very High Speed Backbone Network Services
Provider (vBNS) must:

          Connect NAPs at 155 Mbps or better

          Switch both IP and CLNP

          Implement both BGP and IDRP
          Support multicasting and video teleconferencing

          Establish quality service metrics for network performance 

          Establish Procedures to work with NAP/RA and other network 
          personnel to resolve problems

          Participate in development of advanced routing technologies 
          (e.g. TOS  routing)


Network Access Point (NAP) Manager and Routing Authority

"A NAP is defined as a high speed network or switch to which a number of
routers can be connected for the purpose of traffic exchange [...]"

The NAP Manager/RA must:

          Establish and maintain 100 Mbps (or better) LANs or MANs as 
          AUP-Free NAPs

          Develop attachment policies, procedures, and fee schedules for
          connecting to NAPs

          Specify and ensure reliability and security standards

          Establish and maintain a Route Server supporting both IDRP and 
          BGP, and switching both CLNP and IP

          Ensure routing stability and provide for simplified routing 
          strategies (e.g. default routing)

          Establish procedures to work with vBNS Provider and other 
          network personnel to resolve problems


Networks attaching to a NAP must:

          Connect at T1 or better

          Switch both IP and CLNP

          Support both BGP and IDRP

          Support Video Teleconferencing

          Pay pro-rated cost of maintaining NAP and RA

          Subscribe to policies set by the RA

The Federation of American Research Networks (FARNET) Report 
on the NAP Manager/RA and vBNS Provider Draft Solicitation

From the FARNET report:

"The transition from the current architecture to the new one will be 
extremely complex. A comprehensive  transition plan must be developed 
and managed to protect stability, and existing providers must be 
represented in the planning process."

Consensus opinions and Recommendations

   1.  NSF should place the new solicitation more clearly within the 
       NREN context.

   2.  The plans for governance and management of the new 
       infrastructure, and the process for achieving them,  
       should be stronger and more explicit. 

   3.  Transition planning must begin early and must include the 
       provider community: the organizations and institutions that 
       furnish network services today.

   4.  Separate the Routing Arbiter function from that of the NAP.

   5.  Enforcement of "appropriate use" policies will continue to be an 
       issue under the new plan.

   6.  NSF's leadership role in extending networking to all of research 
       and education should be reaffirmed and continued.

   7.  Criteria for attachment to NAPs and to the vBNS are critical and 
       should be described by NSF in the solicitation.


The FARNET Report (cont.)

   8.  We recommend the following priorities in setting evaluation 
       criteria for the review of responses to the final solicitation.

          GOAL                                 Priority

       Promotion of broad infrastructure            Very High
       Interaction with community, including        Very High
              technology transfer
       Continuity and stability of services         High
       QoS measurement, accountability              High
       Advancement of technology                    High/Medium
       Commercialization                            Medium
       Cost-effectiveness                           Medium
       CLNP availability                            Medium
       Facilitation of new applications             Medium
       Provision of video services                  Low/Medium

   9.  NAP parameters should be based on multiple dimensions and should 
       not be set solely on the basis of cost, which is only one
       component of the total NSFNET system.

  10.  We strongly recommend that the following technical requirement be 
       included in the solicitation.

               The vBNS should provide redundancy among NAPs using
               proven technology.

               NSF should require connecting current T3 network to NAPs
               as part of the transition.

               The vBNS provider should carry full routing information
               (given the limitations of route server technology).

               The vBNS provider should have a publicly available MIB.


What we know about the current solicitation.

          Final version of solicitation made public by late Feb 93

          Proposals due 75 days later

          Four supercomputer sites to be connected by vBNS

          Some number of NAPs (<<20) to be connected to vBNS

          Expect new language to firm up a few items from the draft
          solicitation, but the intent is not to prescribe solutions.
          Instead, NSF left requirements general enough to incent
          creative responses.

          Upside:     NSF gets a wide variety of responses, maximizing
                      probability of technology and policy

          Downside:   Planners (technical and policy) must evolve within
                      the structure prescribed by the award.  Timing
                      becomes an issue.


Timeline Comparisons

NIS Solicitation:
     May 92   - proposals due
     Jan 93   - Announcement of award to ATT, et al.

T1 to T3 transition:
     Jan 91   - Test T3 network deployed
     April 91 - T3 carries 1st 'production' level traffic
     April 92 - T3 carries 1/2 of all NSFNET traffic
     Nov 92   - All networks use T3

Transition to next NSFNET:
     Feb 93   - solicitation made public
     May 93   - proposals due
     June 93  - review panel
     July 93  - NSB approval
     Sept 93  - award made public
     Nov 93   - prototype deployed
     Jan 94   - First production
     April 94 - Transition complete?



Regionals will be given some funds for connecting to a Network Service 
Provider (NSP) of their choice (process t.b.d.) or to a NAP

Subsidy funds for connecting to the NSP or a NAP will decline to $0 by 
1996.  (e.g. 100% 1st yr,  60% 2nd yr,...).

Can change network provider selection


What can we do to prepare for the transition?

We must focus early on communicating any technical concerns to our 
management and NSF, specifically relating to open issues that are as 
yet unresolved and may affect each of our networks':

          Operational viability

               How do I coordinate inter-regional diagnosis and repair 
               of end-to-end connectivity problems?

               Will I have to deploy a new trouble ticket system?

               What are the performance and availability requirements 
               for NAP participation?  How will they be measured and
               reported?  Who specifies these criteria?  Who reviews
               submissions for connection?

               If I connect to a NAP, will I have to deploy my own
               equipment there?  If so, will there be someone there to 
               act as my agent for repair and maintenance?

          Engineering plans

               How many NAPs will there be?  (Will there be one close 
               to us?)

               How should our network attach?  Via a BB provider?  
               Directly to the NAPs?

               Should I build a redundant connection to the NAPs?  To 
               my BB provider?

               Am I able to meet minimum requirements to connect to the 
               NAPs?  (IDRP, CLNP, video, operations coverage)

               Do I need to negotiate, deploy, and manage N^2 routing 
               relationships if I connect directly to a NAP?

               To what extent must I be prepared to enforce policy 
               routing?  What choices will the RA provide my network or 
               our clients regarding policy routing?


What can we do to prepare ...(cont.)

          Technical budgets
             -  Personnel:   How much extra operational staff will I 
                             need?  Engineering staff?

             -  Capital:     Will I have to purchase new (or 
                             updated) border routers?  New network 
                             management stations and/or trouble
                             ticket systems?

             -  Operations:  What new on-going costs must I budget 
                             for to pay for any new circuits?  To 
                             pay for pro-rated NAP/RA costs?  How
                             will these costs be determined?

We will need to significantly increase our intercommunications among 
attaching mid-levels and BB service providers to facilitate:

          -  technical transition planning and engineering

          -  operations coordination and trouble ticket passing

          -  end-to-end connectivity problem resolution

          -  diagnosis and repair of peer routing instabilities

After the contract has been awarded, we must be prepared to 
immediately  participate in deployment engineering and planning with 
all other players (vBNS, NAP, RA awardees, NSF, other mid-levels).


The view from a few steps back...

Representative Boucher (chair of the House subcommittee that oversees 
NSF) from a speech made to National Net 92 on March 25:

"In developing the detailed plan for transition to the NREN, I think a 
few basic principles must be observed.

          First, the benefits of this network should flow to the nation 
          broadly and not just to a narrow few.
          The developments of markets and the involvement of industry    
          [...] is essential.

          [The] development of the technology and management of the NREN 
          should push the limits necessary to stimulate and meet the 
          demand for services while ensuring reliability and stability to 
          the users.

          And finally, the many communities that participate in the 
          development and use of the NREN must have a voice in planning
          for the network and for its long-term management." 

5.) Virtual Routes
    Bilal Chinoy (SDSC), presenter

Bilal presented a paper that was published in ACM CCR in 1992, on his
analysis of routing traffic on the NSFNET backbone.  His premise was
that one can take advantage of destination locality in Internet traffic
to design more compact forwarding tables. That is, one can keep the
size of forwarding tables at about 20% of the total potential
destinations and yet deal efficiently with all offered traffic.
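
A toy model of that caching scheme (an LRU cache over destinations; the
capacities and names are illustrative, not from the paper):

```python
from collections import OrderedDict

class RouteCache:
    """Keep only recently used destinations in the forwarding table;
    fall back to the full routing table on a miss."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.misses = 0

    def lookup(self, dest, full_table):
        if dest in self.cache:
            self.cache.move_to_end(dest)         # recently used
        else:
            self.misses += 1
            self.cache[dest] = full_table[dest]  # fill from full table
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
        return self.cache[dest]

full = {f"net{i}": f"hop{i}" for i in range(100)}
cache = RouteCache(capacity=20)  # ~20% of destinations, as in the premise
for dest in ["net1", "net2", "net1", "net1", "net3", "net2"]:
    cache.lookup(dest, full)
print(cache.misses)  # 3 misses; the rest served from the cache
```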

6.) Current Status/Problems
    Mark Knopper (Merit), discussion leader

This session covered the NSFNET network status, including recent events
such as dismantling the T1 backbone, deployment of FDDI cards, current
status and future plans. Discussion focused on the routing table size
in the backbone routers operated by ANS, and how to manage this by
phasing in the CIDR architecture.

Mark's slides:

Mark Knopper

Current Status/Problems

-Traffic Statistics
-T3 Network Status
-Upcoming changes on backbone
-MBONE routing
-Policy Routing Database Changes


T3 Network Status

-FDDI installations:

                     NEARnet        (ENSS 134)  12/8
                     Argonne        (ENSS 130)  12/8
                     FIX-E          (ENSS 145)  12/11
                     SESQUInet      (ENSS 139)  12/22
                     Cornell        (ENSS 133)  12/24
                     FIX-W          (ENSS 144)  12/29
                     NWnet          (ENSS 143)  12/29
                     Merit          (ENSS 131)   1/8

                     WestNet        (ENSS 142)   2/1

-Rcp_Routed and FDDI interAS metrics

-10000 route capacity on RS/960 cards


Upcoming Changes

- Near Term (Feb-March)
          Internal route aggregation ("CIDR")
          Route caching
          AIX 3.2
          Gated with BGP 3

-Medium Term (March-June)
          CLNP forwarding
          Dual IS-IS
          BGP 4

-Longer Term (July-September)


IETF and MBONE connectivity

-ANS testing T1 data compression devices
-Alternatively, a second T1 will be installed at OARnet
-Recommend that all NSFNET regionals support MBONE:
          mrouted machine attached to DMZ

          Distribute multicast from that machine to sites on campus
          and within regional.

          Obtain feed from another site with direct T3 backbone


Policy Routing Database Changes

-New Informix/RS6000 database system almost ready (really).
-Merit is working on automating the configuration process:
          -Parsable NACR
          -Shared Whois Project (SWIP)
          -"Config Ticket" system

-Upcoming work
          -Policy specification
          -Route server configuration and further experiments


-Backbone performance and reliability is good.
-What has not gone well?
-What else should Merit/ANS be working on now?

Organizations represented at the meeting:

CNRI, 3com, Wellfleet, IBM, MILnet, NIST, Sprint, US West, PSI,
AlterNet, ESnet, ANS, SDSC, Merit, MichNet, WestNet, CICNet, LANL/NSF,
BARRnet, NorthWestNet, Cornell, NEARnet, SESQUInet,
NCAR, Los Nettos/ISI, NCSA, SURAnet, and MSC.
