I don't need no stinking firewall!

George Bonser gbonser at seven.com
Tue Jan 5 22:43:55 CST 2010


> -----Original Message-----
> From: nanog-bounces at nanog.org [mailto:nanog-bounces at nanog.org] On
> Behalf Of Robert Brockway
> Sent: Tuesday, January 05, 2010 1:25 PM
> To: NANOG list
> 
> On Tue, 5 Jan 2010, Dobbins, Roland wrote:
> 
> > Putting firewalls in front of servers is a Really Bad Idea - besides the
> 
> Hi Roland.  I disagree strongly with this position.
> 
> > fact that the stateful inspection premise doesn't apply (see above),
> 
> The problem is that your premise is wrong.  Stateful firewalls
> (hereafter
> just called firewalls) offer several advantages.  This list is not
> necessarily exhaustive.
> 
> (1) Security in depth.  In an ideal world every packet arriving at a
> server would be for a port that is intended to be open and listening.
> Unfortunately ports can be opened unintentionally on servers in several
> ways: sysadmin error, package management systems pulling in an extra
> package which starts a service, etc.  By having a firewall in front of
> the server we gain security in depth against errors like this.

Most large operations don't have individual servers accepting incoming
traffic.  They generally have a farm of servers behind a load balancer.
The load balancer itself provides this layer of security.  You configure
a service on a specific address/port/protocol on the load balancer and
bind that to a group of servers on the other side of the balancer.  It
doesn't matter what you have open on the server; the server is not
directly accessible from the Internet.  If you don't have the service
configured on the load balancer, traffic simply gets dropped on the
floor at the front door.  Most load balancers these days are even
configurable to handle (or not) specifically formatted requests so you
can craft ACLs based on the actual requests if you need to.  In other
words, for all practical purposes these days, the load balancer IS a
stateful firewalling proxy.  Having another one in front of the load
balancer that is simply configured to pass your production traffic to
the load balancer is a waste of money and resources in addition to a
potential bottleneck.  This is particularly true if you are a
client/server operation where you are handling traffic on custom ports
using custom protocols that a firewall isn't going to know how to
inspect anyway.
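As a concrete sketch of that model, here is a minimal, hypothetical
HAProxy-style configuration (the VIP, pool names, and addresses are made
up, using documentation prefixes).  Only the frontend defined here is
reachable from outside, regardless of what the servers themselves have
listening:

```
# Hypothetical load balancer config (HAProxy syntax).  Only the VIP
# below is exposed; anything else aimed at the farm is dropped at the
# front door because no service is configured for it.
frontend www_vip
    bind 198.51.100.10:443
    default_backend web_farm

backend web_farm
    balance roundrobin
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
```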

> (2) Centralised management of access controls.  Many services should
> only be open to a certain set of source addresses.  While this could be
> managed on each server we may find that some applications don't support
> this well, and management of access controls is then spread across any
> number of management interfaces.  Using a firewall for network access
> control reduces the management overhead and chances of error.  Even if
> network access control is managed on the server, doing it on the
> firewall offers additional security in depth.
>
>

In the case where a service is open to only a certain set of source
addresses such as when providing a specific service to a business
partner, the ACL can just as easily be configured on the routers at the
ingress to your network.  Any traffic that you KNOW you are going to
drop should be dropped as soon as possible so you don't waste resources
forwarding the traffic to the firewall.  Ingress ACLs are the best place
for that, in my opinion.  A modern layer2/3 switch can drop traffic in
hardware without a lot of resource consumption.
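For example, a hypothetical IOS-style ingress ACL (the addresses are
documentation prefixes, not real ones) that permits one partner network
to a single service and drops other attempts at the edge, in hardware:

```
! Permit the partner's /24 to the service, drop anyone else probing
! that port at the ingress interface, and pass the rest untouched.
ip access-list extended PARTNER-IN
 permit tcp 203.0.113.0 0.0.0.255 host 198.51.100.10 eq 8443
 deny   tcp any host 198.51.100.10 eq 8443
 permit ip any any
!
interface GigabitEthernet0/1
 ip access-group PARTNER-IN in
```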

> (3) Outbound access controls.  In many cases we want to stop certain
> types of outbound traffic.  This may contain an intrusion and prevent
> attacks against other hosts owned by the organisation or other
> organisations.  Trying to do outbound access control on the server
> won't help as if the server is compromised the attacker can likely
> get around it.

Yes.  Outbound access control IS a valid use for a firewall, but that
is a separate path from your production service traffic, or at least it
probably should be.  A modern load balancer is capable of source NATing
the traffic so the connections to your server appear to come from the
load balancer itself, and it can insert a header in the transaction
that includes the original connecting IP address of the client, so the
server still has access to that information if it needs it.  When the
server needs to initiate an outbound connection, it has a default
gateway to a path that eventually leads to a firewall on the way out.
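On the server side, recovering the client address from such a header can
be as simple as the following sketch (X-Forwarded-For is the common
convention, though balancers can be configured to insert other headers):

```python
def client_ip(headers, peer_ip):
    """Recover the original client address when the load balancer
    source-NATs the connection and inserts an X-Forwarded-For header.

    This assumes the balancer strips any inbound X-Forwarded-For and
    sets its own, so the first entry can be trusted; otherwise fall
    back to the socket's peer address (the balancer itself).
    """
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return peer_ip
    return xff.split(",")[0].strip()

# The TCP peer is the balancer (10.0.0.1), but the header carries
# the real client address.
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.1"}, "10.0.0.1"))
# → 203.0.113.7
```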

> (4) Rate limiting.  The ability to rate limit incoming and outgoing
> data
> can prevent certain sorts of DoSes.

This can be done on either the load balancer or the routers, depending
on the kind of traffic you want to limit.
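The mechanism underneath most of these boxes is a token bucket.  A
minimal Python sketch of the idea (an injectable clock is used so the
demo is deterministic; real gear does this per flow, in hardware):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: the same idea a load
    balancer or router applies when policing request or packet rates."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # maximum bucket depth (burst size)
        self.tokens = float(burst)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock: 2 requests/sec sustained, burst of 3.
t = [0.0]
bucket = TokenBucket(rate=2, burst=3, clock=lambda: t[0])
burst_results = [bucket.allow() for _ in range(5)]   # all arrive at t=0
print(burst_results)        # [True, True, True, False, False]
t[0] = 1.0                  # one second later: 2 tokens refilled
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
```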

> (5) Signature based blocking.   Modern firewalls can be tied to
> intrusion
> prevention systems which will 'raise the shields' in the face of
> certain attacks.  Many exploits require repeated probing and this
> provides
> a way to stop the attack before it is successful.

Modern load balancers are capable of this.  You can program it for a
particular query string, for example, and if you see it, you can ignore
it, redirect it, log it, whatever.  The line is also blurring in many
routers these days between what is a firewall function and what is a
router function.  IOS, for example, has a large array of firewalling
features available.  
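In essence the balancer is doing pattern matching on the request line.
A toy Python sketch of the idea (the signatures here are illustrative
only, not a real rule set):

```python
import re

# Hypothetical signatures a balancer or IPS might match in a request
# line, with the action to take on a hit (drop, redirect, log, ...).
SIGNATURES = [
    re.compile(r"\.\./"),                       # path traversal probe
    re.compile(r"union(\s|\+|%20)+select", re.I),  # SQL injection probe,
                                                   # incl. URL-encoded spaces
]

def classify(request_line):
    """Return the action for a request line: drop on any signature hit,
    otherwise forward to the server pool."""
    for sig in SIGNATURES:
        if sig.search(request_line):
            return "drop"
    return "forward"

print(classify("GET /index.html HTTP/1.1"))                  # forward
print(classify("GET /q?id=1+UNION+SELECT+passwd HTTP/1.1"))  # drop
```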

Many of the problems associated with firewalls in your production path
aren't going to show themselves if you have fewer than, say, a million
or two users.  When you get to tens of millions of users with literally
billions of transactions, things get a little different.  I have run a
lot of firewalls out of resources over the years ... Netscreens,
PIX/ASA, even special-purpose units hand-built using OS kernel
firewalling.  And that is not even in the face of a DoS; that is with
normal production traffic.

In practically every case the response is the same ... start disabling
features on the firewall for the inbound production traffic flow
because you are going to allow that anyway.  Put any "blocks" on the
provider edge ingress ports and drop the traffic before it even gets to
the firewall or load balancer.  At that point the majority of the
traffic through the firewall becomes "allow <protocol> any <service IP>
eq <service port>" for the address/port/protocol of your service VIPs,
so if you are allowing "any", why are you paying for an extra colo
heater?  The load balancers these days have methods just as effective
as firewalls for dealing with things like SYN attacks: they can rate
limit requests, they can buffer them, they can drop them, and routers
can limit ICMP.  There are a lot of tools in the drawer.  Yes, you have
to take some of the things that were done in one spot and do them in
different locations now, but the result is an amazing increase in
service capacity per dollar spent on infrastructure.
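On a Linux-based balancer or server, for instance, SYN floods are
typically absorbed with kernel knobs like these (a sketch; the exact
settings and values depend on the platform and traffic levels):

```shell
# Enable stateless SYN cookies under backlog pressure and deepen the
# SYN backlog, so half-open floods don't exhaust connection state.
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_max_syn_backlog=65536
```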




