Why Network Visibility Is at the Heart of Digital Transformation


A failure to transform digitally and keep pace with the likes of Airbnb and Uber is often cited as the main reason more than half of the Fortune 500 companies have disappeared since 2000. But to execute a digital transformation successfully, companies must be able to see, manage and secure the complex digital applications at the heart of their evolution – whether those applications are customer-facing, enabling a mobile workforce, or running critical back-end operations.

Rather than monitoring a single monolithic application, companies must now monitor hundreds or thousands of component services, all interacting in intricate patterns. This makes it much harder to locate bottlenecks, detect suspicious data flowing out, and understand what the user is experiencing.

Digital companies can reach new customers immediately at almost no cost. They can compete in new sectors and transform quality and productivity by combining new technologies and data sources. They can streamline business processes and introduce business models that redefine industries. These ambitious goals are ultimately realised through new digital applications built on intricate microservices-based architectures.

But failing to monitor these applications effectively can have serious implications for both customer experience and corporate performance. We know that 32% of customers are unforgiving of a poor digital experience, and more than half (53%) of mobile users will abandon a page that takes longer than three seconds to load. So how can companies maintain visibility into these new network architectures to ensure that everything runs smoothly?

Firstly, let’s examine how these networks are built. The two- or three-tier applications that held sway for so long are rapidly being replaced by microservices-based architectures running on-premises, in the cloud and on partner clouds, delivered simultaneously through web and mobile interfaces. Amazon, for example, moved from a two-tier monolith to a fully distributed, decentralized services platform in which 100-150 services are accessed to build a single page.
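To make that fan-out concrete, here is a minimal Python sketch of how a single page request might gather responses from several component services concurrently. The service names and endpoints are hypothetical, and the network calls are simulated so the example runs standalone:

```python
import asyncio

# Hypothetical component services that a single page request fans out to.
# In a real deployment there may be dozens or hundreds of these, spread
# across on-premises hosts, cloud instances and partner clouds.
SERVICES = {
    "profile":         "https://profile.internal/api/user",
    "recommendations": "https://recs.internal/api/suggest",
    "inventory":       "https://inventory.internal/api/stock",
    "pricing":         "https://pricing.internal/api/quote",
}

async def call_service(name: str, url: str) -> tuple[str, str]:
    # Stand-in for a real HTTP call (e.g. via aiohttp); simulated here
    # so the sketch runs without network access.
    await asyncio.sleep(0.05)
    return name, f"response from {url}"

async def build_page() -> dict[str, str]:
    # One user-facing request triggers many concurrent service calls;
    # a slow or failing call in any one of them degrades the whole page.
    results = await asyncio.gather(
        *(call_service(name, url) for name, url in SERVICES.items())
    )
    return dict(results)

if __name__ == "__main__":
    page_data = asyncio.run(build_page())
    for service, payload in page_data.items():
        print(f"{service}: {payload}")
```

Multiply this pattern across 100-150 services per page, each with its own dependencies, and it becomes clear why a single slow or misbehaving component can be so hard to isolate.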

Many of today’s microservices, moreover, use third-party software and open-source libraries over which you have little control. These digital applications can provide huge benefits in terms of flexibility and agility – allowing new functionality to be developed, tested and deployed much faster than before – but they also create serious challenges for network monitoring and security.

The sheer volume of component services involved makes it much more difficult to monitor what the user is experiencing, or to see where glitches are occurring. As a result, a company could have thousands of customers complaining that they cannot log on to a service while the IT team scrambles to locate the source of the problem. It’s easy to see how situations like this can affect a company’s bottom line.

From a security perspective, such complexity brings additional complications. Firstly, because application data can flow across many servers running on different infrastructures, the potential attack surface is dramatically larger.

Secondly, the need for application components to communicate with third-party systems increases exposure to outside threats. This has caused security headaches for companies the world over.

For example, in 2018 it was revealed that part of Tesla’s Amazon Web Services (AWS) cloud infrastructure had been running cryptocurrency-mining malware after a sophisticated cryptojacking operation exploited an administrative console that wasn’t password-protected.

Compliance issues come into play as well: if a company doesn’t know that an obscure database behind a third-party microservice is leaking customer data, it could be exposing the whole business to heavy fines, or worse. The truth is that no single developer can truly understand all the communication patterns between the different microservices. So how can companies monitor these complex new network architectures and support business goals while maintaining security and a good user experience?

To address these problems, it’s vital to turn to the network itself and the traffic flowing between microservices. Network traffic is a rich source of telemetry that can be tied back to a specific application or microservice, making it a uniquely effective way to map out communication patterns and profile them.

Using this intelligence, it’s possible to fingerprint application activity. To be useful, however, you need to capture all application traffic on the network, including the traffic between the many component microservices. Armed with this information, you can identify applications and their microservices, extract metadata, and distribute that intelligence to monitoring and security tools.
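As a rough illustration of what that capture-and-profile step can look like, here is a minimal Python sketch using the scapy packet-capture library. It assumes scapy is installed and the process has capture privileges; the port-to-service mapping is hypothetical and would normally come from service discovery:

```python
from collections import Counter

# Requires scapy (pip install scapy) and packet-capture privileges
# (typically root/administrator).
from scapy.all import IP, TCP, sniff

# Hypothetical mapping of listening ports to microservice names.
PORT_TO_SERVICE = {
    8001: "profile",
    8002: "recommendations",
    8003: "inventory",
    8004: "pricing",
}

flow_counts: Counter = Counter()

def profile_packet(pkt) -> None:
    # Attribute each observed TCP packet to a named microservice so that
    # communication patterns between components can be mapped and profiled.
    if IP in pkt and TCP in pkt:
        service = PORT_TO_SERVICE.get(pkt[TCP].dport, "unknown")
        flow_counts[(pkt[IP].src, pkt[IP].dst, service)] += 1

# Capture a small sample of TCP traffic, then print the observed
# communication pattern; the same metadata could instead be exported
# to monitoring and security tools.
sniff(filter="tcp", prn=profile_packet, store=False, count=200)

for (src, dst, service), n in flow_counts.most_common():
    print(f"{src} -> {dst} [{service}]: {n} packets")
```

In production this role is played by dedicated traffic-aggregation and packet-broker layers operating at line rate, but the principle is the same: observe the wire, attribute flows to services, and feed the resulting metadata to the tools that need it.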

Successful digital transformation journeys begin with redefining corporate capabilities and culture, but they are ultimately realised through complex microservices-based applications. Without the right monitoring tools, however, a series of poor customer experiences or security incidents can bring these complex architectures down like a house of cards.

Networks are critical to delivering great customer experiences and high application performance. Ultimately, it is only by growing their network capabilities to ensure complete visibility that companies can achieve their future goals securely.
