[Tfug] Topology questions

William Stott will at stottland.net
Mon Oct 8 22:24:49 MST 2012


When your FTP / HTTP server gets owned, the attacker has access to control
the bypass and configure their own routing on your system. Hence, your
network security now relies on your services, regardless of your firewall or
router. So a multihomed configuration isn't really in the best interest of
security (even if you don't intend to route the traffic). This is why I hold
my ground on a single service entry point, non-routable by hardware
configuration, with a proxy service to provide the security that you need
beyond layer 4. I can explain further, but I refuse to write an email longer
than I would want to read.

Thank you,

Will

-----Original Message-----
From: Bexley Hall [mailto:bexley401 at yahoo.com] 
Sent: Saturday, October 06, 2012 11:10 PM
To: Tucson Free Unix Group
Subject: Re: [Tfug] Topology questions

Hi Will,

On 10/6/2012 10:05 PM, William Stott wrote:
> Wow. You are working yourself into a situation where complexity 
> overcomes security and maintenance.

Actually, I see the multihomed FTP/HTTP/etc server as a simpler
configuration to audit (and, thus, maintain)!

FW ---+-- RTR ---+---
      |          |
      +-- FTP ---+

   exposed     internal (1 shown)

The firewall can allow *only* FTP/HTTP/etc. incoming connections directed to
the "exposed" interface on that server -- no reason for the router to *ever*
have to deal with incoming service requests on those ports.
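
For example, if the firewall ended up being a Linux/iptables box, the
forwarding policy boils down to a handful of rules -- something like this
(interface names and addresses are made up for illustration):

    # eth0 = WAN link, eth1 = "exposed" internet
    # 192.0.2.10 = server's exposed interface, 192.0.2.1 = router's exposed interface
    iptables -P FORWARD DROP
    # incoming service requests: only FTP/HTTP, and only to the server
    iptables -A FORWARD -i eth0 -o eth1 -d 192.0.2.10 -p tcp -m multiport --dports 21,80 -j ACCEPT
    # outbound traffic originated by the router (on behalf of internal hosts)
    iptables -A FORWARD -i eth1 -o eth0 -s 192.0.2.1 -j ACCEPT
    # replies for the above, in either direction
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # (passive FTP data connections also want the nf_conntrack_ftp helper loaded)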

At the same time, the router NEVER has to pass requests from any of the
internal internets through to that "exposed" internet so that they could
reach the FTP/HTTP server.

I.e., the exposed port on the FTP/HTTP server should *only* see requests
from the outside world.  The exposed port on the router should *never* see
requests from the outside world.  You don't have to merge this
internal/external forwarding into the router; the cabling enforces the
distinctions.

Similarly, the router can *prevent* the wireless internet from ever
accessing the internal internets -- it can always be treated as "potentially
hostile" (even if it employs security).

ALL of the traffic on the exposed internet is considered "suspect".
There's never a chance of the router passing FTP/HTTP requests from an
internal internet onto that internet.  A protocol sniffer could trivially
detect any violations, here -- because none of the internal subnets should
ever appear on the exposed internet for any reason.
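
E.g., a box parked on the exposed internet running something like tcpdump
should stay silent (substitute whatever the internal subnets really are):

    # complain if any internal source address ever shows up on the exposed wire
    tcpdump -n -i eth1 'src net 192.168.0.0/16 or src net 10.0.0.0/8'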

The router's rules never have to address the possibility of those addresses
appearing on that interface (i.e., "block unless it's an FTP request coming
from inside").

This lets me size the firewall hardware for *just* the bandwidth of the WAN
connection.  Similarly, the router can be sized for more "typical" usage
between the internal networks (as outlined previously).

At the same time, the FTP/HTTP server can be sized to handle the heavier
demands placed on it by the *internal* clients (who certainly don't want to
have to operate at the slow WLAN speed!)

Since (internal) FTP/HTTP traffic doesn't go *through* the router (which
would put that traffic on the exposed internet), the name service (operating
*in* the router) can ensure that FTP.MyDomain resolves to X.X.1.X for hosts
issuing queries on internal internet 1; X.X.2.X for hosts querying the name
server from internal internet 2; etc.  I.e., the router doesn't even
*see* that traffic.
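
If that name service turns out to be BIND, views would do it -- roughly
along these lines (subnets, zone file names and addresses are placeholders):

    view "internal1" {
        match-clients { 192.168.1.0/24; };
        zone "MyDomain" { type master; file "MyDomain.int1"; };  # FTP -> X.X.1.X
    };
    view "internal2" {
        match-clients { 192.168.2.0/24; };
        zone "MyDomain" { type master; file "MyDomain.int2"; };  # FTP -> X.X.2.X
    };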

Likewise, I can ensure inetd doesn't spawn anything *other* than FTP/HTTP on
that exposed interface.  And, that the FTPd and HTTPd listening on that
interface only allow limited access to the resources available on the
server.
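
If the server ends up running xinetd rather than classic inetd, the
per-service "bind" attribute makes that restriction explicit -- a rough
sketch, with the daemon path and the exposed address as placeholders:

    service ftp
    {
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        server      = /usr/libexec/ftpd
        bind        = 192.0.2.10    # listen on the exposed interface only
    }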

> Maybe you should think about sticking the web services between your 
> firewall and router "DMZ network," configure actual proxy services for 
> external use, and call it a day. "Multi-homed" anything that isn't a 
> router or firewall is normally not in your best interest, but more of 
> a band-aid to a real solution (even if that means using windows DHCP 
> over Linux *unnecessary stab*).

Note my use of multihoming is intentionally oriented around *simpler*
security/configuration -- not providing redundant communication paths.
If an interface on the router goes down, there is nothing that the FTP/HTTP
server can do to route traffic *around* it.  Likewise, if an interface on
the FTP server goes down, there's nothing the router can do to provide an
alternate route -- keeping the networks (and their *traffic*) isolated from
each other is the whole point.

Using the firewall and router to restrict what can appear on that exposed
internet, I think, simplifies the rule writing for everything that has to
provide services to internal and external clients (by keeping things
separate).

Dunno.  I'll try to build a server next week and see how "simple"
the rules turn out to be.  And, how hard it will be to add new services --
as well as to change what's available where.

--don


_______________________________________________
Tucson Free Unix Group - tfug at tfug.org
Subscription Options:
http://www.tfug.org/mailman/listinfo/tfug_tfug.org




