Interview: Dave Klein, Senior Director of Engineering & Architecture, Guardicore

Data center security has become a hot topic in the information security industry in recent years. Data centers help private companies move to the cloud and manage digital transformation at a lower cost than running their own centralized computing networks and servers, providing services such as data backup and recovery, storage and management.

Often complex, industrial-scale operations, large data centers house huge numbers of computing systems and associated components, and process colossal amounts of information – some of it sensitive or proprietary, including customer data and intellectual property.

Keeping that data safe and secure is therefore of paramount importance to any data center, but doing so is not without its challenges.

Dave Klein is senior director of engineering & architecture at Guardicore. He has over 20 years of experience working with large organizations on the design and implementation of security solutions across very large-scale data center and cloud environments. At Guardicore, Klein works with customers on the architecture and implementation of advanced data center and hybrid cloud security solutions for the rapid detection, containment and remediation of security breaches.

Infosecurity recently spoke to Klein about the current state of data center security, the risks that persist and strategies for improvement.

How would you describe the current state of data center security?

Data center security has proven tenuous of late. Why? It boils down to three main reasons. What is interesting is that even for the rare enterprises that remain cloud-averse, two of the three reasons are still directly pertinent.

The rapid adoption of DevOps methodologies: Both the business and IT sides of organizations are pursuing IT innovation as a way to deliver business initiatives and gain competitive differentiation. These business and competitive drivers, combined with cost-saving automation, have become paramount.

While the speed of solution delivery has continued to accelerate and more and more processes have become automated, security practices have not kept up, leaving deployments vulnerable. Too often, these pushes for accelerated delivery leave the CISO out of the conversation. Even when CISOs are invited in, their security teams often have outdated legacy tools that focus too heavily on perimeter security rather than on lateral movement and the workflows that need protecting.

Heterogeneous platform complexity: Most enterprises I deal with support a whole array of platforms out of necessity, which creates management hassles along with visibility and segmentation issues. They often run everything from legacy and end-of-life Unix, Windows and Linux platforms straight through to modern virtualization, public and (for most) private clouds and containers. Securing and managing these disparate systems is difficult, and achieving seamless visibility across all of them is harder still.

Furthermore, while traditional segmentation tools like VLANs, ACLs, firewalls and (for cloud adopters) security groups were fine in the past, the manual implementation these approaches require, often running into months, is no longer acceptable either in cost or in time to reach protection. They also do not provide the process, identity and Fully Qualified Domain Name (FQDN) policy capabilities essential to reducing the attack surface effectively.

The shared cloud responsibility model is not well understood: When I sit with CISOs and CIOs, they often still think the cloud vendor carries the lion's share of responsibility for security. This isn't true – I'd suggest that 90% of the responsibility falls on the enterprise itself.

Why is it that data centers are targeted by cyber-attacks, and what are the biggest challenges surrounding the security of a data center’s information?

I did some research earlier this year looking at the largest attacks to hit data centers in the past few years. All were 'direct hits' – no traditional spear phishing or other end-user vectors were used; the attackers went after the data center infrastructure directly. Combine that with DevOps driving faster, more automated development and deployment, poor visibility and limited segmentation capabilities, and you get fundamental hygiene issues that plague enterprises. Yet many of these challenges can be overcome without significant burden.

What are the most effective steps infosec leaders can take to improve security?

A great deal can be achieved in just a matter of weeks. At a minimum, organizations should:

  • Shore up basic hygiene, including vulnerability scanning and patching. Implement strong password enforcement combined with two-factor authentication. There is also a need for better elevated-account control and expiration procedures, better certificate management practices and control of enterprise services such as DNS, remote access (SSH/RDP), Active Directory and other critical services
  • Seek out visibility and next generation software-defined segmentation solutions that are platform agnostic and can also be wrapped into the DevOps model
  • Since most enterprises have already adopted automation in the form of playbooks, incorporate the two measures above into existing scripts – everything from vulnerability scanning and patching to labeling and segmentation
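To make the last point concrete, here is a minimal sketch of how label-driven segmentation might be folded into an automation script. All names here (the inventory, role labels, ports and rule format) are hypothetical illustrations, not any specific vendor's API: the idea is simply that hosts tagged with roles at deployment time can have explicit allow-list rules generated for them automatically rather than by hand.

```python
# Illustrative sketch only: derive host-to-host allow-list rules from role
# labels. Inventory, labels, ports and the rule format are hypothetical.
from itertools import product

# Hypothetical inventory: each host carries a role label set at deploy time.
INVENTORY = {
    "web-01": "web",
    "web-02": "web",
    "db-01": "db",
    "jump-01": "admin",
}

# Hypothetical role-level policy: (source role, destination role, port).
POLICY = [
    ("web", "db", 5432),   # web tier may query the database
    ("admin", "web", 22),  # admin hosts may SSH to the web tier
]

def build_rules(inventory, policy):
    """Expand role-level policy into explicit host-to-host allow rules."""
    by_role = {}
    for host, role in inventory.items():
        by_role.setdefault(role, []).append(host)
    rules = []
    for src_role, dst_role, port in policy:
        for src, dst in product(by_role.get(src_role, []),
                                by_role.get(dst_role, [])):
            rules.append((src, dst, port))
    return rules

rules = build_rules(INVENTORY, POLICY)
for src, dst, port in rules:
    print(f"ALLOW {src} -> {dst}:{port}")
```

Because the rules are generated from labels, adding a new web host to the inventory automatically extends the allow list, which is what lets segmentation keep pace with automated deployment.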
