A Tribute To Our Oldest And Dearest Of Friends – The Firewall Part 2
In the first part of my coverage on firewalls I mentioned the usefulness of firewalls: apart from being one of the few commercial offerings to actually deliver in security, the firewall really does do a great deal for our information security posture when it's configured well.
Some in the field have advocated that the firewall has seen its day and it's time for the knacker's yard. When firewalls are imagined as they are in the movies - something to be "broken through" or "punched through" - they can seem useless once bad folk have compromised networks seemingly effortlessly.
One doesn't "break through" a firewall. Your profile is assessed. If you fit a certain profile you are allowed through. If not, you absolutely shall not pass.
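The "profile assessment" above can be sketched in a few lines. This is a hypothetical, simplified model of how a firewall evaluates an ordered rulebase - first match wins, with an implicit deny at the end. The rules, networks and port numbers are illustrative, not any vendor's syntax:

```python
# A minimal sketch of first-match rule evaluation with default deny.
# Rule fields are illustrative, not any real firewall's configuration format.
from ipaddress import ip_address, ip_network

RULES = [
    # (source network, destination port, action)
    ("10.0.0.0/8",     25,  "allow"),   # internal hosts out to SMTP
    ("192.168.1.0/24", 443, "allow"),   # one subnet allowed HTTPS
]

def evaluate(src_ip, dst_port):
    """Return the action for a packet: the first matching rule decides."""
    for net, port, action in RULES:
        if ip_address(src_ip) in ip_network(net) and dst_port == port:
            return action
    return "deny"  # no profile match: you shall not pass
```

A packet from 10.1.2.3 to port 25 fits the profile and is allowed through; anything that matches no rule falls through to the implicit deny.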
There have been counters to these arguments in support of firewalls, but the efficacy of well-configured firewalls has only been covered at some distance from the nuts and bolts, and so is not fully appreciated. What about segmentation, for example? Are there any other security controls and products that can so indisputably be linked with cost savings?
Segmentation allows us to devote more resources to more critical subnets, rather than applying blanket measures across a whole network. As a contractor with a logistics multinational in Prague, I was questioned a few times as to why I was testing all internal Linux resources on a standard-issue UK contract rate.
The answer? Because they had a flat, wide open internal network with only hot swap redundant firewalls on the perimeter. Regional offices connecting into the data centre had frequent malware problems with routable access to critical infrastructure.
Back in the late 90s and early noughties, some service providers offered a firewall assessment service, but the engagements lacked focus and direction, and then the service disappeared altogether - partly because of the lack of thought that went into preparation, and partly because many in the market really did believe they had nailed firewall configuration.
These engagements were delivered in a way that went something like "why do you leave these ports open?", "because application X needs those ports open"... and that would be the end of that, because the service providers didn't know application X, or where its IT assets were located, or the business importance of application X. Thirty minutes into the engagement there were already "why are we here?" faces in the room.
As a roaming consultant, I would always ask to see firewall configurations as part of a wider engagement - usually an architecture workshop whiteboard session, or larger scale risk assessment. Under this guise, there is license to use firewall rulebases to tell us a great deal about the organisation, rather than querying each micro-issue.
Firewall rulebases reveal a large part of the true "face" of an organization. Political divisions are revealed, along with the old classic: social networks, betting sites (and such-like) opened only for senior management subnets, and oftentimes some interesting ports opened only for managers' secretaries.
Nine times out of ten, when you ask to see firewall rules, faces will change in the room from "this is a nice time wasting meeting, but maybe I'll learn something about security" to mild-to-severe discomfort. Discomfort - because there is no hiding place any more.
Network and IT ops will often be aware that there are some shortcomings, but if we don't see their firewall rules, they can hide and deflect the conversation in subtle ways. Firewall rulebases reveal all manner of architectural and application-related issues.
To illustrate some firewall configuration and data flow/architectural issues, here are some examples of common issues:
- Internal private resources 1-to-1 NAT'd to public IP addresses: an internal device with a private RFC 1918 address (something like 10. or 192.168. ...) has been allocated a public IP address that is routable from the public Internet and clearly "visible" on the perimeter. Why is this a problem? If this device is compromised, the attacker has compromised an internal device and therefore has access to the internal network. What they "see" (can port scan) from there depends on internal network segmentation, but if they upload and run their own tools and warez on the compromised device, it won't take long to learn a great deal about the internal network make-up. This NAT'ing problem would be a severe one for most businesses.
- A listening service was phased out, but the firewall still considers the port to be open: the severity of this is usually quite high, but just like everything else in security, it depends on a lot of factors. Usually, even in default configurations, firewalls "silently drop" packets when they are denied, so there is no answer to a TCP SYN from a port scanner trying to fire up some small talk of a long winter evening. However, when there is no TCP service listening on a higher port (for example) but the firewall also doesn't block access to that port, there will be a quick response to the effect of "I don't want to talk, I don't know how to answer you, or maybe you're just too boring" - bad, but at least there's a response. Let's say port 10000 TCP was left unfiltered. A port scanner like nmap will report other ports as "filtered" but 10000 as "closed". "Closed" sounds bad, but the attacker's eyes light up when seeing this... because they have a port on which to bind their shell - a port that will be accessible remotely. If all ports other than listening services are filtered, this presents a problem for the attacker; it slows them down, and that is what we're ultimately trying to achieve.
- Dual-homed issues: sometimes you will see internal firewalls with rules for source addresses that look out of place. For example, most of the rules are defined with 10.30.x.x and then in amidst them you see a 172.16.x.x. Uh oh. It turns out this is the source address of a dual-homed host: one NIC has an address on a subnet on one side of the firewall, and another NIC sits on the other side. So effectively the dual-homed device is bypassing firewall controls, and if it is compromised, the firewall is rendered ineffective. Nine times out of ten, dual homing is only set up as a shortcut to make admins' lives easier. I did see it once for a DMZ, where the internal network NIC address was on the same subnet as a critical Oracle database.
- VPN gateways in inappropriate places: VPN services should usually be listening on a perimeter firewall. This enables firewalls to control what a VPN user can "see" and cannot see once they are authenticated. Generally, the resources made available to remote users should be in a VPN DMZ - at least give it some consideration. It is surprising (or perhaps not) how often you will see VPN services on internal network devices. So on firewalls such as the inner firewall of a DMZ, you will see classic VPN TCP services permitted to pass inbound! So the VPN client authenticates and then has direct access to the internal network - a nice encrypted tunnel for syphoning off sensitive data.
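The "closed" versus "filtered" distinction from the phased-out service example above can be made concrete with a rough sketch of how a SYN scanner classifies a port from the reply it gets. The response labels here are simplified stand-ins for real packet handling, not actual nmap internals:

```python
# A scanner's-eye view of port states: why "closed" is better news for an
# attacker than "filtered". Response strings are simplified stand-ins.
def classify_port(response):
    """Map the reply to a SYN probe onto nmap-style port states."""
    if response == "syn-ack":
        return "open"      # a service is listening
    if response == "rst":
        # Nothing is listening, but the firewall let the probe through -
        # an attacker can bind a shell here and reach it remotely.
        return "closed"
    # No reply at all: the firewall silently dropped the probe.
    return "filtered"
```

Port 10000 left unfiltered with no listener answers with a RST and shows up "closed"; a properly filtered port gets no reply at all and stays "filtered".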
Outbound filtering is often ignored, usually because the business is unaware of the nature of attacks and technical risks. Inbound filtering is usually quite decent, but it's still the case as of 2012 that many businesses do not filter any outbound traffic - as in none whatsoever. There are several major concerns when it comes to egress:
- Good netizen: if there is no outbound filtering, your site can be broadcasting all kinds of traffic to all networks everywhere. Sometimes there is nothing malicious in this... it's just seen as incompetence by others. But then of course there is the possibility of internal staff hacking other sites, or your site being used as a base from which to launch attacks against third parties - with a source IP address registered under your organisation's ownership - and this is no small matter.
- Your own firewall can be DOS'd: border firewalls NAT outgoing traffic, translating addresses from private to public space. In malware outbreaks that involve a lot of traffic generation, the NAT pool can fill quickly and the firewall's NAT'ing can fail to service legitimate requests. This wouldn't happen if those packets were simply dropped.
- It will be an essential function of most malware and manual attacks to dial home once "inside" the target - for botnets in particular. Plus, some publicly available exploits initiate outbound connections rather than fire up listening shells.
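The NAT pool exhaustion point can be illustrated with a toy model - the pool size, addresses and flow tuples here are all invented for the sketch:

```python
# Toy model of NAT pool exhaustion: a border firewall has a finite number
# of translation slots; chatty malware can consume them all, after which
# legitimate outbound connections fail. Sizes and addresses are made up.
class NatPool:
    def __init__(self, size):
        self.size = size
        self.table = set()  # active (src, dst, port) translations

    def translate(self, flow):
        """Allocate a translation slot for an outbound flow, if any remain."""
        if flow in self.table:
            return True
        if len(self.table) >= self.size:
            return False  # pool exhausted: this connection is refused
        self.table.add(flow)
        return True

pool = NatPool(size=1000)
# One infected host spraying connections to many destinations...
for i in range(1000):
    pool.translate(("10.0.0.99", f"198.51.100.{i % 250}", 40000 + i))
# ...leaves no slot for a legitimate user's connection.
legit_ok = pool.translate(("10.0.0.5", "203.0.113.10", 51515))
```

With outbound filtering, the malware's packets would be dropped at the firewall before ever consuming a translation slot, and `legit_ok` would stay true.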
Generally, as with ingress, take the standard approach: start with deny-all, then figure out which internal DNS and SMTP servers need to talk to which external devices, and take the same approach with other services. Needless to say, this has to be backed by corporate security standards, and made into a living process.
Some specifics on egress:
- NetBIOS broadcasts reveal a great deal about internal resources - block them. In fact, for any type of broadcast: what possible reason can there be for allowing it outside your network? There are other legacy protocols which broadcast nice information for interested parties - Cisco Discovery Protocol, for example.
- Related to the previous point: be as specific as possible with subnet masks. Make these as "micro" as possible.
- There is a general principle around proxies for web access and other services: the proxy is the only device that needs direct access to the Internet; all others can be blocked.
- DNS: Usually there will be an internal DNS server in private space which forwards queries to a public Internet DNS service. Make sure the DNS server is the only device "allowed out". Direct connections from other devices to public Internet services should be blocked.
- SMTP: access to mail services is important for many malware variants, some of which embed their own mail client functionality. Internal mail servers should be the only devices permitted to connect to external SMTP services.
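Putting the egress specifics together, here is a minimal sketch of a default-deny outbound policy in which only the internal DNS forwarder and mail relay are "allowed out", each pinned to a single /32 host rather than a broad subnet. The addresses are assumptions for illustration only:

```python
# Default-deny egress: only named hosts, pinned to /32 masks, get out.
# Addresses and the rule layout are illustrative assumptions.
from ipaddress import ip_address, ip_network

EGRESS_RULES = [
    ("10.0.0.53/32", 53, "allow"),   # the internal DNS forwarder only
    ("10.0.0.25/32", 25, "allow"),   # the internal mail relay only
]

def egress_allowed(src_ip, dst_port):
    """First matching rule wins; anything unmatched is dropped outbound."""
    for net, port, action in EGRESS_RULES:
        if ip_address(src_ip) in ip_network(net) and dst_port == port:
            return action == "allow"
    return False  # everything else - desktops, broadcasts - stays inside
```

A desktop at 10.0.0.77 trying to resolve names directly against a public DNS service matches nothing and is dropped; the forwarder itself gets through.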
As a final note, for those wishing to find more detail, the book I mentioned in part 1 of this diatribe, "Building Internet Firewalls" illustrates some different ways to set up services such as FTP and mail, and explains very well the principles of segregated subnets and DMZs.
Cross-posted from Security Macromorphosis