Greenfield Access Network

Scott Helms khelms at zcorum.com
Thu Jul 31 13:54:16 UTC 2014


"What is the ideal way to aggregate the 40 10G connections from the uplinks
of the chassis? I would guess a 10G switch since 10G ports on a router
would be much more expensive?"

Definitely aggregate into a switch first unless you want to run a Layer 3
switch as your router, which I don't recommend.


"Which router is recommended to handle 4 10G internet connections with full
tables, and then at least 4 10G ports going back to the 10G aggregation
switch?"

Your math is a little backwards: it's very unlikely that you're going to
have 40 Gbps of Internet (or other interconnection) traffic for the router
to actually have to process.  What is the average provisioned speed for
each of the 10k PON ports?  What oversubscription ratio are you planning
for?  What, if anything, will you be carrying on net, i.e., bandwidth
consumption that won't come from or go to the public Internet?  Your own
video, voice, or other services are examples of things that are often on
net.  In any case you're probably in the ASR family with Cisco; I can't
remember the equivalent from Juniper.
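To make the sizing question concrete, here's a back-of-the-envelope sketch
in Python; the speeds and oversubscription ratio are made-up illustrations,
not numbers from the thread:

```python
def required_transit_gbps(subs: int, avg_provisioned_mbps: float,
                          oversub_ratio: float) -> float:
    """Rough transit sizing: total provisioned speed divided by the
    oversubscription ratio, converted from Mbps to Gbps."""
    return subs * avg_provisioned_mbps / oversub_ratio / 1000

# 10,000 subs at 50 Mbps average with a 25:1 oversubscription ratio
print(required_transit_gbps(10_000, 50, 25))  # 20.0 Gbps, well under 40G
```

The point is that the router has to be sized for the expected peak transit
load, not for the sum of the access-side uplinks.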


"How do you handle IP address management? A /20 is only 4096 IP addresses,
but the network would have potentially 10,000 customers. Assume that
getting more space from ARIN is not an option. Is CGN an option?"

CGN is the option of last resort IMO, but you may have to consider it.  A
better approach is to see if your backbone providers will agree to give you
some blocks that you can announce, and use those blocks for dynamic
customers only.  Your static IP customers should come from your direct ARIN
allotment in case you need to choose a new backbone provider, which is
extremely common over time.
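The address math in the question checks out, as a quick check with Python's
ipaddress module shows (the prefix here is from the RFC 6598 shared space
that CGN deployments use, chosen just for illustration):

```python
import ipaddress

# A /20, regardless of which /20, holds 2**(32-20) = 4096 addresses.
block = ipaddress.ip_network("100.64.0.0/20")
print(block.num_addresses)  # 4096, so a /20 cannot cover 10,000 subs 1:1
```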


"Dynamic IP
addresses? DHCP?"

DHCP with enforcement from the shelves.  All the major OLT vendors support
doing this so that a customer can only use the address assigned to him by
DHCP and nothing else, except for those customers that you choose to hard
code.  Make most of your "static" customers actually DHCP reservations and
only hard code those that you must.
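As a sketch of the reservation approach in ISC dhcpd.conf syntax (the
addresses and MAC are made up for illustration):

```
# Dynamic customers pull from the pool; the OLT enforces the lease.
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.250;
}

# A "static" customer handled as a reservation rather than hard coded
# on the ONT: same fixed address every time, but still managed centrally.
host static-customer-1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.0.2.10;
}
```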

"How do you separate users and traffic? VLANs, Service VLANs, Per Customer
VLANs, Usernames? Passwords? PPPoE? MAC Separation?
Is a BRAS or BNG functionally really needed or are these older concepts?"

DHCP, with Option 82 logging for the circuit ID, is the better path than a
BRAS (PPPoE) these days.  Here's a paper we put together on that topic a
while back:

http://www.zcorum.com/wp-content/uploads/Why-Should-I-Move-from-PPPoA-or-PPPoE-to-DHCP.pdf

Depending on your OLT vendor you can use either their built-in port
isolation or QinQ tagging.  Both are reliable and scalable; just ask your
vendor which is the best option for your specific gear.



"If CGNAT or DHCP is needed, what will host the CGNAT or DHCP service? The
core router, a linux box, or something else?"

I wouldn't have those two services connected personally, though there are
hooks for some of the CGN boxes to talk to DHCP servers.  I would hope you
can get another 6k addresses and avoid the need for CGN altogether.  Having
said that, have you tested your OLTs and ONTs for IPv6 interoperability?
If they don't handle it well then you're going to have to think about
alternatives like 6RD (http://en.wikipedia.org/wiki/IPv6_rapid_deployment).

For DHCP at your scale you can run ISC DHCP (
http://www.isc.org/downloads/dhcp/), which is the most common open source
DHCP daemon, if you have someone who can take care of a Linux server, parse
the Option 82 information for logging, and handle the configuration of the
DHCP daemon itself.  Otherwise you might want to look at commercial
products designed for the service provider market like Incognito's BCC and
Cisco's BAC (the CNR replacement):

http://www.incognito.com/products/broadband-command-center/
http://www.cisco.com/c/en/us/products/cloud-systems-management/broadband-access-center/index.html
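If you do parse Option 82 yourself, the payload is just a list of
type-length-value suboptions (suboption 1 is the circuit ID, 2 is the
remote ID, per RFC 3046).  A minimal parsing sketch in Python, assuming
you already have the raw option bytes from your logging pipeline:

```python
def parse_option82(data: bytes) -> dict:
    """Split a DHCP relay agent information option (option 82) into its
    suboptions.  Each suboption is one code byte, one length byte, then
    that many bytes of value."""
    names = {1: "circuit_id", 2: "remote_id"}
    out = {}
    i = 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        out[names.get(code, code)] = data[i + 2:i + 2 + length]
        i += 2 + length
    return out

# A hypothetical payload: circuit ID "pon1" plus a 2-byte remote ID.
print(parse_option82(b"\x01\x04pon1\x02\x02AB"))
# {'circuit_id': b'pon1', 'remote_id': b'AB'}
```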


"What about DNS?
Is a firewall needed in the core?
What else is needed?"

There are two kinds of DNS, caching (recursive) and authoritative.  The
first is what your customers will use to resolve things on the Internet,
and the second is used to provide caching name servers on the Internet with
information about domains you control (are authoritative for).  The first
needs good performance, availability, and scalability since your customers
will use your caching name servers constantly.  Most people can run BIND at
your scale, again if you have someone with Linux experience, but there are
other alternatives.  PowerDNS has both caching and authoritative modules,
and there are some commercial offerings out there, both as cloud hosting
and local deployments.  Your backbone provider will also often have caching
name servers your customers can use, but the quality varies quite a bit.
You can also, especially at first, leverage some of the free offerings like
Google's Public DNS.  I don't recommend firewalls for service provider
networks, but you should make sure your gear can implement BCP 38 and is
configured to do so.
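On the BCP 38 point, most gear can do this with strict unicast RPF on the
customer-facing side; an IOS-style sketch, with an illustrative interface
name:

```
interface TenGigabitEthernet0/0/1
 description Toward aggregation switch / customers
 ip verify unicast source reachable-via rx
```

This drops packets whose source address isn't reachable back out the
interface they arrived on, which is what keeps spoofed traffic from
leaving your network.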


Scott Helms
Vice President of Technology
ZCorum
(678) 507-5000
--------------------------------
http://twitter.com/kscotthelms
--------------------------------


On Thu, Jul 31, 2014 at 9:23 AM, Colton Conor <colton.conor at gmail.com>
wrote:

> If a new operator or city is building a greenfield access network from the
> ground up, what software and hardware is needed in the core network to
> provide and manage residential and business internet services similar to
> the likes of AT&T, Comcast, and Google Fiber? Television and Telephone
> services are not to be considered only internet.
>
> Assume hypothetically the operator already has the following in place:
> 10 GPON OLTs Chassis from an access vendor in 10 POPs around town (each POP
> has 1 Chassis). Each OLT Chassis has 4 10G Uplinks back to the core.
> Dark fiber going from the POP locations back to the core location
> Assume a 32:1 way split, and each OLT chassis has enough ports populated to
> serve the area.
> 10,000 GPON ONTs. The ONTs can be put in routed gateway or bridged mode.
> Assume you are building a network designed to serve 10,000 subs
> All the fiber splitters, ducts, fiber, etc connecting the OLTs to the ONTs
> is already in place
> ASN from ARIN
> /20 of IPv4 space and /32 of IPv6 space from ARIN
> 4 burstable 10G internet connections from 4 tier 1 internet providers
>
> Questions are:
>
> What is the ideal way to aggregate the 40 10G connections from the uplinks
> of the chassis? I would guess a 10G switch since 10G ports on a router
> would be much more expensive?
> Which router is recommended to handle 4 10G internet connections with full
> tables, and then at least 4 10G ports going back to the 10G aggregation
> switch?
> How do you handle IP address management? a /20 is only 4096 IP addresses,
> but the network would have potentially 10,000 customers. Assume that
> getting more space from ARIN is not an option. Is CGN an option? Dynamic IP
> addresses? DHCP?
> How do you separate users and traffic? VLANs, Service VLANs, Per Customer
> VLANs, Usernames? Passwords? PPPoE? MAC Separation?
> Is a BRAS or BNG functionally really needed or are these older concepts?
> If CGNAT or DHCP is needed, what will host the CGNAT or DHCP service? The
> core router, a linux box, or something else?
> What about DNS?
> Is a firewall needed in the core?
> What else is needed?
>
> Is there a guide out there somewhere? I know many cities are looking at
> building their own network, and have similar questions. Access vendors are
> willing to sell gear all day long, but then they leave it up to the
> operator/city to answer these harder questions.
>
> How would you build a access network from the ground up if you had the
> resources and time to do so? Would you even use GPON? Even if GPON was not
> used and another access technology like AE, VDSL2, or wireless was used I
> think many of these questions would be the same.
>


