
Why Open Port Monitoring is Both an Essential and Flawed Security Control

In the cybersecurity world, testing for the existence of exploitable vulnerabilities isn’t always an exact science. Checking for open ports (CIS Control 9 - Limitation and Control of Network Ports, Protocols and Services) sounds simple enough, but the reality is a long way off.

Each protocol, like HTTP for web services or RDP for Microsoft Terminal Services, has a default port assigned. The port number is just another level of addressing, so that connections to an IP address can be paired up with the underlying service.

While the protocol must always match the service, the port number is yours to choose. Indeed, assigning a non-default port is a standard security practice intended to throw hackers off the scent: the default port tells an attacker exactly which protocol to try, whereas a random port serves as interference.

This entwined relationship between service, protocol and port is important to understand – you can’t have one without the others. Remove the service and you close the port. The main reasons why monitoring open ports is regarded as a key security control are:

  1. The more open/accessible we make a system, the greater the attack surface. With new exploits being discovered every day, anything that reduces the potential for attack is a good thing
  2. Where a service is needed and there is a choice of ports/protocols on offer, e.g. HTTP or HTTPS using TLS 1.2, we want to use the secured variant
  3. By extension, we also want to ensure that the unencrypted channel is never used, and disable it

While the underlying purpose of the security control is clear, problems start because most organizations (and auditors) take a literal interpretation of the requirement. By starting with the question ‘How do I check for open ports?’ the inevitable conclusion is to use a port-scanning technology, like Nmap or a vulnerability scanner.

This is appealing because a port scan is quick and easy to set up: just dial in the IP address and port ranges to scan and let it run. The problem is that measuring open ports in a way that is consistent and reliable is far more difficult than it sounds.
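The core of any connect-style port scan is just an attempted TCP connection per port. A minimal sketch using only the Python standard library (host, port range and timeout values here are illustrative, not from the article):

```python
import socket


def check_tcp_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. the port is open."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0


def scan(host: str, ports) -> list:
    """Connect-scan an iterable of ports, returning those that accepted a connection."""
    return [p for p in ports if check_tcp_port(host, p)]
```

Even this toy version hints at the limitations discussed above: a firewall silently dropping packets makes every port look closed, and it says nothing at all about UDP.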

For instance, in no particular order, problems are presented by: protocols that use random or ephemeral ports, ports that open and close as manual or on-demand services start/stop, and of course, firewalls that are designed to control/block traffic.

All this is before you start trying to test for UDP ports, which are more challenging because, unlike their TCP cousins, UDP services are notoriously reluctant to respond when probed during a scan.

What’s the alternative? Go direct? A netstat command run on the host will provide reliable output, including visibility of UDP services, but the results will still be cluttered with ephemeral ports.
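Separating genuine listeners from ephemeral-port noise is mostly a parsing exercise. A sketch of that filtering step, using netstat-style sample lines and assuming the common Linux ephemeral range starting at 32768 (both the sample data and the cutoff are assumptions; real ranges vary by OS):

```python
# Assumed ephemeral-port floor; Linux defaults to 32768-60999, other OSes differ.
EPHEMERAL_START = 32768

# Illustrative netstat-style output: proto, recv-q, send-q, local, foreign, [state]
SAMPLE = """\
tcp  0  0 0.0.0.0:22      0.0.0.0:* LISTEN
tcp  0  0 0.0.0.0:443     0.0.0.0:* LISTEN
tcp  0  0 10.0.0.5:49731  0.0.0.0:* LISTEN
udp  0  0 0.0.0.0:123     0.0.0.0:*
"""


def listening_ports(text: str) -> list:
    """Extract (protocol, local_port) pairs from netstat-style lines."""
    ports = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        local = fields[3]                       # e.g. "0.0.0.0:22"
        port = int(local.rsplit(":", 1)[1])     # rsplit copes with IPv6 colons
        ports.append((fields[0], port))
    return ports


def non_ephemeral(ports: list) -> list:
    """Drop ports in the assumed ephemeral range, keeping stable listeners."""
    return [(proto, p) for proto, p in ports if p < EPHEMERAL_START]
```

Filtering `SAMPLE` this way keeps the SSH, HTTPS and NTP listeners while discarding the 49731 ephemeral entry.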

Instead, we could flip the control around and focus on the services dimension first. Again, some vulnerability scanners can report on services using a credentialed scan, but host-resident system integrity monitoring technology delivers a superior solution. It not only gathers details of installed services with their running and startup states, but, being host-resident, also has the advantage of being able to continuously track changes to service configuration settings.

This means that the true intent of the security control is delivered both for change control (reporting any drift from the baseline configuration build) and for breach detection (reporting unexpected new services and processes).
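The drift-reporting idea reduces to comparing a baseline snapshot of services against the current state. A minimal sketch (the service names and states below are illustrative, not taken from any real host):

```python
def service_drift(baseline: dict, current: dict) -> dict:
    """Compare current service states against a baseline snapshot of name -> state."""
    added = {s: st for s, st in current.items() if s not in baseline}
    removed = {s: st for s, st in baseline.items() if s not in current}
    changed = {
        s: (baseline[s], current[s])
        for s in baseline.keys() & current.keys()
        if baseline[s] != current[s]
    }
    return {"added": added, "removed": removed, "changed": changed}


# Hypothetical snapshots: a new unexpected service appears, one disappears,
# and one changes state -- exactly the events the control should surface.
baseline = {"sshd": "running", "nginx": "running", "telnetd": "disabled"}
current = {"sshd": "running", "nginx": "stopped", "cryptominer": "running"}

drift = service_drift(baseline, current)
```

Here the unexpected `cryptominer` service surfaces as an addition (the breach-detection case), while the `nginx` state change surfaces as configuration drift (the change-control case).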

Ultimately, both the port and service dimensions should be tested and baselined, with changes tracked, but the argument for reversing the priority of the security control, focusing more on services than on open ports, carries merit.

Security controls are subject to a ‘bang-for-buck’ rating like anything else, and one that is easier to operate, with results that are easier to interpret, will always be more effective than a more technically challenging alternative.

While breaches continue to increase, anything that makes security best practices easier to implement, and us more secure, should be welcomed.
