Embracing Your Legacy: Protecting Legacy Systems in a Modern World

Legacy infrastructure is still a crucial part of enterprises across many industries: healthcare organizations relying on Windows XP, Oracle databases running on old Solaris servers, business applications that require Red Hat Enterprise Linux 4 (RHEL 4), and ATMs running decade-old Windows versions are just a few examples. Even legacy Windows Server 2008 machines are still in use in many organizations.

Legacy infrastructure is prevalent in data centers, especially in large enterprises. For example, it’s common to see older AIX machines handling banks’ mission-critical transaction processing, or endpoints such as ATMs, medical devices and point-of-sale systems running operating systems that reached end-of-life long ago. Modernizing the applications running on this infrastructure is an ongoing, painful effort that usually takes years.

Unsecured Legacy Systems Compromise Your Entire Data Center

The risk posed by improperly secured legacy systems is broad and goes beyond the legacy workloads themselves. An unpatched device running Windows XP, for example, can be easily exploited to gain access to the data center. Just earlier this month, we got a painful reminder of this risk, when Microsoft released a security update for a severe remote code execution vulnerability (CVE-2019-0708, “BlueKeep”) in old operating systems such as Windows XP and Windows Server 2003.

If attackers gain access to this one unpatched machine (which is much easier than hacking a modern, well-patched server), they can move laterally deeper into the network. As data centers become more complex, expand to public clouds, and adopt cutting-edge technologies such as containers, this risk grows. Interdependencies among different business applications (legacy and non-legacy) become more complex and dynamic, making traffic patterns harder to control from a security perspective. This gives attackers more freedom to move undetected across different parts of the infrastructure.

Old Infrastructure, New Risk

Legacy systems have been with us for years – yet the risk that they pose is constantly increasing. As organizations go through the process of digital transformation, modernizing their infrastructure and data centers and moving to hybrid clouds, attackers have more ways of gaining access to critical core applications.

An on-prem business application on a legacy system, which was once only used by a small number of other on-prem applications, might now be used by a larger number of both on-prem and cloud-based applications. Exposing legacy systems to an increasing number of applications and environments increases their attack surface.

So the question is – how can we reduce this risk? How do we keep the business-critical legacy parts secure, while enabling the organization to rapidly deploy new applications on modern infrastructure?

Identifying the Risk

The first step is to properly identify and quantify the risk. Using existing inventory systems and “tribal knowledge” is probably not enough – you should always strive to get a full, accurate and up-to-date view of your environment. For legacy systems, obtaining a correct view can be especially challenging, as organizational knowledge about these systems tends to fade over time.

Security teams should use a good visibility tool to provide them with a map that answers these questions (a short sketch of the first step follows the list):

  1. Which servers and endpoints are running legacy operating systems?
  2. To which environments and business applications do these workloads belong?
  3. How do these workloads interact with other applications and environments? On which ports? Using which processes? For what business goal?
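As a minimal sketch of answering the first question, assuming a simple CSV export from a CMDB or visibility tool (the file name, column names, and end-of-life list below are illustrative, not a specific product’s schema), flagging workloads on end-of-life operating systems might look like this:

```python
import csv

# Illustrative set of end-of-life operating systems; extend this from
# your vendors' lifecycle pages.
EOL_OSES = {"windows xp", "windows server 2003", "windows server 2008",
            "rhel 4", "solaris 9"}

def find_legacy_workloads(inventory_csv):
    """Yield (hostname, os, application) for workloads on end-of-life OSes.

    Assumes a CSV export with 'hostname', 'os' and 'application' columns;
    adjust the field names to your own inventory schema.
    """
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["os"].strip().lower() in EOL_OSES:
                yield row["hostname"], row["os"], row["application"]

if __name__ == "__main__":
    for host, os_name, app in find_legacy_workloads("inventory.csv"):
        print(f"{host}: {os_name} (end-of-life) -- part of '{app}'")
```

Joining the results against application ownership records then answers the second question: which business applications depend on each flagged machine.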

The answers to these crucial questions are the starting point to reducing your risk. They show you which workloads present the most risk to the organization, which business processes might be hurt during an attack, and which network routes an attacker might use when laterally moving between legacy and non-legacy systems across clouds and data centers. Users are often surprised when they see unexpected flows to and from their legacy machines, leading to more questions about security posture and risk.
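To make the “unexpected flows” point concrete: given flow records exported from a visibility tool, a short script can surface flows that touch legacy hosts but were never reviewed. The record shape, host names, ports, and baseline below are hypothetical; adapt them to whatever your tool actually exports:

```python
# A flow here is (source, destination, port, process).
LEGACY_HOSTS = {"atm-014", "aix-txn-01", "xp-lab-7"}

# Flows the application owners have already reviewed and approved.
BASELINE = {
    ("branch-gw", "atm-014", 443, "atmsvc.exe"),
    ("aix-txn-01", "core-db", 1521, "txnproc"),
}

def unexpected_legacy_flows(flows):
    """Return observed flows touching a legacy host that are not baselined."""
    return [
        f for f in flows
        if (f[0] in LEGACY_HOSTS or f[1] in LEGACY_HOSTS) and f not in BASELINE
    ]

observed = [
    ("branch-gw", "atm-014", 443, "atmsvc.exe"),     # expected
    ("xp-lab-7", "cloud-api", 8080, "svchost.exe"),  # surprise: legacy -> cloud
]
for src, dst, port, proc in unexpected_legacy_flows(observed):
    print(f"review: {src} -> {dst}:{port} via {proc}")
```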

A good visibility tool will also help you identify and analyze the systems that need to be migrated to other environments. Most importantly, a visual map of communication flows enables you to easily plan and deploy a tight segmentation policy around these assets. A well-planned policy dramatically reduces the risk posed by these older machines.
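One way to turn an approved flow map into such a policy is to treat each reviewed flow as an explicit allow rule and close the legacy segment with a default deny. A minimal sketch, using hypothetical application labels and ports rather than any particular product’s policy syntax:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    source: str        # a label such as "app:billing-web"
    destination: str
    port: int

# Hypothetical example: ring-fence a legacy billing database so only
# the flows reviewed on the visibility map are permitted.
APPROVED = [
    Rule("app:billing-web", "app:billing-db-rhel4", 1521),
    Rule("app:backup", "app:billing-db-rhel4", 22),
]

def compile_policy(rules):
    """Render explicit allows, then a default deny around the legacy segment."""
    lines = [f"allow {r.source} -> {r.destination} : tcp/{r.port}"
             for r in rules]
    lines.append("deny  any -> env:legacy : any   # default-deny ring-fence")
    return "\n".join(lines)

print(compile_policy(APPROVED))
```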

Reducing the Risk with Micro-Segmentation

Network segmentation is widely accepted as a cost-effective way to reduce risk inside data centers and clouds. Specifically, using micro-segmentation, users can build tight, granular security policies that significantly limit an attacker's ability to move laterally across workloads, applications, and environments.

When dealing with legacy infrastructure, the value of good visibility and micro-segmentation becomes even clearer. Older segmentation techniques such as VLANs are hard to maintain, and often place all similar legacy systems in a single segment – leaving the entire group wide open to attack after a single breach. In addition, firewall rules between legacy VLANs and other parts of the data center are hard to maintain, leading to an over-permissive policy that increases overall risk. With proper visualization of both legacy and modern workloads, security teams can plan a server-level policy that only allows very specific flows between the legacy systems themselves, and between legacy and more modern environments.
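As a concrete illustration of such a server-level policy, here is how an allowlist could be rendered as host firewall rules on a legacy Linux server (iptables reaches back to RHEL 4-era systems; Windows XP-era endpoints would instead need an agent or upstream enforcement). Addresses and ports are hypothetical:

```python
# Render an allowlist policy as iptables commands for a legacy Linux host.
ALLOWED_INBOUND = [
    ("10.1.20.11", 1521),  # billing-web -> this Oracle listener
    ("10.1.30.5", 22),     # jump host   -> SSH for maintenance
]

def render_iptables(allowed):
    """Allow named peers on named ports, then drop everything else."""
    cmds = [
        "iptables -F INPUT",
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for src, port in allowed:
        cmds.append(
            f"iptables -A INPUT -p tcp -s {src} --dport {port} -j ACCEPT")
    cmds.append("iptables -A INPUT -j DROP")
    return "\n".join(cmds)

print(render_iptables(ALLOWED_INBOUND))
```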

Coverage is Key

When choosing a micro-segmentation solution, make sure it can be deployed easily across your entire infrastructure, spanning all workload types in your data centers and clouds. Segmenting modern applications while leaving legacy systems behind leaves a high-risk gap in the security of your infrastructure.

Personally, I believe security vendors should embrace the challenge of covering all infrastructure as an opportunity to help their customers with this growing pain. While some vendors focus only on modern infrastructure and drop support for older operating systems, a mature security platform should cover the entire gamut of infrastructure.

Conquering the Legacy Challenge

Legacy systems pose a unique challenge for organizations: they're critical for business, but harder to maintain and properly secure. As organizations move to hybrid clouds and their attack surface expands, special care needs to be taken to protect legacy applications. To do this, security teams should accurately identify legacy servers, understand their interdependencies with other applications and environments, and control the risk by building a tight segmentation policy. Leading micro-segmentation vendors should be able to cover legacy systems without sacrificing capabilities on any other type of infrastructure. Guardicore Centra provides visibility and micro-segmentation capabilities across the entire infrastructure – old and new – so you are not left managing blind spots.
