Reinstating cloud visibility with deep packet inspection

By Magnus Bartsch
Published on: 04.04.2022

AWS, Microsoft, Google, Alibaba, Oracle, IBM and Tencent: hardly any discussion of the cloud takes place without referencing these and other large players in the cloud market. The pandemic has clearly intensified the race to the cloud, and the shift has benefited all parties: enterprises, cloud providers and end users accessing applications and services seamlessly from the comfort of wherever they want to be.

Moving an enterprise’s workload to an ‘as-a-service’ model, whether it is ‘infrastructure-as-a-service’, ‘platform-as-a-service’ or ‘software-as-a-service’, marks a significant change in how an enterprise creates, manages and secures its workloads. While the cloud is touted for simplifying these processes and allowing enterprises to focus on their core business, it also disrupts the control an enterprise has over its traffic.

In legacy deployments centered on on-site infrastructure, network management, from the underlying hardware to the last piece of code, was contained in enterprise-controlled workspaces. Network functions such as IP probes, CDNs, DNS, tethering, SDN controllers and load balancers, along with a wide range of security functions such as intrusion prevention systems (IPS), firewalls and advanced threat protection (ATP), were deployed and managed directly by the enterprise. The enterprise thus had free rein over how these functions were configured, deployed and scaled, and could implement multitier traffic management, security and monitoring policies based on different types of applications, users, network conditions and risk profiles.

Challenges of the hybrid cloud

The move to the cloud might seem like a simple migration of existing setups to a space or platform owned and managed by a third party, but it marks a number of key transitions. First, existing traffic management, security and monitoring policies, and the tools used to enforce them, might no longer be applied consistently to all company traffic, because cloud traffic is managed within its own ecosystem. It is simply not viable to backhaul every packet in an enterprise cloud to the enterprise data center just so that existing policies and tools can be applied across the board.

Second, visibility poses a challenge. While cloud providers typically bundle comprehensive insights on infrastructure and network performance, there is limited visibility into the underlying traffic flows moving in and out of a company cloud, specifically the types of applications and services handled by each of the ‘as-a-service’ offerings. Without deep visibility into the underlying traffic, enterprises cannot extend their existing network policies to their cloud traffic, resulting in very disparate frameworks within a single network. This is exacerbated further in a multi-cloud architecture, where such policies and tools become even more fragmented.

With more enterprises embracing the cloud, these gaps have become a key concern and have led to a number of developments in the cloud space. One is traffic mirroring, which allows part or all of the traffic in the cloud to be replicated and channeled to an external system. Another is the emergence of data and analytics intermediaries that leverage the traffic mirroring capabilities offered by cloud providers to aggregate traffic from multiple clouds and channel these flows to an enterprise’s tools and systems according to its requirements. A third is the use of advanced traffic inspection technologies such as deep packet inspection (DPI) to identify and classify cloud traffic flows by application and service, along with performance attributes such as latency and throughput, enabling intelligent and automated selection of traffic flows for onward forwarding.
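
To make the first of these developments concrete, the sketch below shows how a mirroring session could be set up on AWS, one of the providers named above, using the boto3 SDK for VPC Traffic Mirroring. The network interface IDs are placeholders and the filter simply accepts all inbound TCP traffic; other cloud providers expose comparable packet mirroring features through their own APIs.

```python
import boto3

# Minimal sketch: mirror inbound traffic from a workload ENI to a collector ENI
# on AWS using VPC Traffic Mirroring. All resource IDs are placeholders.
ec2 = boto3.client("ec2", region_name="eu-central-1")

# 1. The collector (e.g. the analytics intermediary) receives the mirrored packets.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0collector0000000",   # placeholder collector ENI
    Description="DPI analytics intermediary",
)["TrafficMirrorTarget"]

# 2. A filter decides which packets are replicated; here: all inbound TCP.
mirror_filter = ec2.create_traffic_mirror_filter(
    Description="Mirror all inbound TCP traffic"
)["TrafficMirrorFilter"]

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,                        # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# 3. The session binds a monitored workload ENI to the target and filter.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0workload00000000",   # placeholder workload ENI
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    SessionNumber=1,
    Description="Mirror workload traffic to the DPI intermediary",
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```

Once such a session is active, the mirrored packets arrive at the intermediary, where they can be inspected and selectively forwarded as described in the following section.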

Extending existing policies to the cloud

As mentioned, extending existing traffic management, security and monitoring policies to an enterprise’s cloud traffic requires, above all, the identification and classification of the underlying applications. Every company handles its own set of applications, which are typically defined by different levels of priority, criticality, confidentiality, authentication rules, security policies and performance SLAs. DPI engines such as ipoque's R&S®PACE 2 provide highly reliable, accurate classification of applications, protocols and services. As an OEM software solution, R&S®PACE 2 can be used to power the analytics intermediary in charge of selecting and forwarding the most relevant flows to selected network tools. With real-time, granular analysis of the underlying traffic flows, such intermediaries can seamlessly and efficiently subject cloud traffic to the same traffic management, security and monitoring policies as traffic handled on site.
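
As a simplified illustration of such an intermediary, the following sketch maps DPI classification results to on-site tool endpoints and forwards only the relevant flows. The classification labels, the policy table and the export function are illustrative assumptions and do not represent the actual R&S®PACE 2 API.

```python
from dataclasses import dataclass

# Simplified sketch of an analytics intermediary that forwards mirrored flows
# to on-site tools based on DPI classification. The classification labels and
# tool endpoints below are hypothetical placeholders.

@dataclass
class FlowRecord:
    src: str
    dst: str
    application: str   # e.g. "crm", "e-commerce", "video-conferencing"
    service: str       # e.g. "video", "audio", "chat"
    byte_count: int

# Per-application policy: which on-site tool receives the flow's records.
FORWARDING_POLICY = {
    "crm":                "apm-collector.corp.example:4739",
    "e-commerce":         "apm-collector.corp.example:4739",
    "video-conferencing": "flow-monitor.corp.example:2055",
    "unknown":            None,   # drop anything that cannot be classified
}

def forward(flow: FlowRecord) -> None:
    destination = FORWARDING_POLICY.get(flow.application, FORWARDING_POLICY["unknown"])
    if destination is None:
        return  # not relevant to any on-site tool
    export(flow, destination)

def export(flow: FlowRecord, destination: str) -> None:
    # In a real deployment this would emit IPFIX/NetFlow records or raw packets.
    print(f"exporting {flow.application}/{flow.service} flow "
          f"{flow.src}->{flow.dst} to {destination}")

forward(FlowRecord("10.0.1.5", "172.16.0.9", "crm", "web", 18234))
```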

Monitoring IP flows, for example, relies on application awareness. Network tools such as routers and switches with embedded flow monitoring functions are typically constrained in bandwidth and CPU and cannot scale processing to every flow in the network. Application-aware sampling powered by R&S®PACE 2 allows an analytics intermediary to select and export flows based on prespecified sampling rules, increasing processing efficiency without sacrificing crucial insights into the network. This includes samples by application type, samples of the most frequently accessed applications and samples by user. The intermediary can also refine this further by building samples based on the different services within an application; for a company’s virtual conferencing application, for example, these could be the video, audio and chat services.
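
A minimal sketch of application-aware sampling is shown below; the application classes and sampling rates are purely illustrative and would in practice be driven by the classification results of the DPI engine.

```python
import random

# Illustrative application-aware sampling: each application class gets its own
# sampling rate instead of one blanket packet/flow sampling ratio.
SAMPLING_RATES = {
    "crm":                1.0,    # keep every flow of the critical CRM application
    "video-conferencing": 0.25,   # keep one in four conferencing flows
    "file-sharing":       0.05,
    "default":            0.01,   # everything else is sampled very sparsely
}

def sample(flow_application: str) -> bool:
    """Return True if this flow should be exported to the monitoring tools."""
    rate = SAMPLING_RATES.get(flow_application, SAMPLING_RATES["default"])
    return random.random() < rate

exported = [app for app in ["crm", "file-sharing", "crm", "web-mail"] if sample(app)]
print(exported)
```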

In the case of application performance management (APM), inputs provided by R&S®PACE 2 enable traffic flows to be categorized and segmented into critical and noncritical applications. For example, an enterprise CRM application or an e-commerce website that generates the bulk of the company’s revenue will see virtually all related flows automatically forwarded to the company’s APM tool hosted and managed on site. This allows the enterprise to be continuously informed of even the slightest performance deterioration. With complete visibility into all of its sessions, the APM tool can produce granular performance analysis categorized by web actions such as account creation, account login, credential change, cart retrieval and payment. For distributed applications hosted and delivered from different clouds, the analysis provided by R&S®PACE 2 ensures that no blind spots remain in managing and delivering critical applications.
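
The sketch below illustrates, under the assumption that forwarded sessions already carry a DPI-derived web action label and a measured response time, how per-action performance figures could be aggregated on the APM side; the session records are synthetic examples.

```python
from collections import defaultdict
from statistics import mean

# Sketch: aggregate response times of an e-commerce application per web action,
# assuming forwarded sessions carry a DPI-derived action label. Synthetic data.
sessions = [
    {"action": "account_login",  "latency_ms": 120},
    {"action": "account_login",  "latency_ms": 180},
    {"action": "cart_retrieval", "latency_ms": 95},
    {"action": "payment",        "latency_ms": 310},
    {"action": "payment",        "latency_ms": 290},
]

latencies = defaultdict(list)
for s in sessions:
    latencies[s["action"]].append(s["latency_ms"])

for action, values in latencies.items():
    print(f"{action}: avg {mean(values):.0f} ms over {len(values)} sessions")
```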

Similarly, for network security, tools such as DLP, next-gen firewalls or IDS rely on insights delivered by R&S®PACE 2 on flows that are suspicious, malicious or anomalous. By identifying these flows in real time, forwarding intermediaries can single out the affected packets and bring them to the attention of these tools for immediate filtering. This has significant cost implications for an enterprise’s hybrid cloud security strategy: existing security policies can be applied to workloads in the cloud without the costs associated with blanket filtering of all cloud traffic. R&S®PACE 2 enables further optimization of cloud security management by coupling the inspection of traffic anomalies with application awareness, which allows traffic from sensitive domains such as banking and healthcare to be forwarded through additional layers of filtering.
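
As a rough sketch of such a policy, the snippet below combines an anomaly indicator with the DPI-derived application category to select a filtering path; the categories, thresholds and path names are assumptions made for illustration.

```python
# Sketch: choose a filtering path for a flow by combining an anomaly score
# with the DPI-derived application category. All names and thresholds are
# illustrative assumptions.
SENSITIVE_CATEGORIES = {"banking", "healthcare"}

def security_path(application_category: str, anomaly_score: float) -> str:
    if anomaly_score > 0.8:
        return "ids-and-block"            # clearly malicious: send to IDS and block
    if application_category in SENSITIVE_CATEGORIES:
        return "extra-filtering-chain"    # sensitive traffic gets additional layers
    if anomaly_score > 0.5:
        return "ids-inspection"           # suspicious but not sensitive: inspect only
    return "pass"

print(security_path("banking", 0.2))       # -> extra-filtering-chain
print(security_path("file-sharing", 0.9))  # -> ids-and-block
```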

The benefits of the cloud are evident from the many companies making the switch. To gain these benefits while retaining control of their networks and assets, enterprises need solutions that can extend existing network capabilities to the new cloud environment. Solution providers looking to fill this gap can rely on R&S®PACE 2 and its unparalleled capabilities in delivering real-time traffic awareness for any network scenario.

Magnus Bartsch

Contact me on LinkedIn

Magnus has always had a keen interest in computer science. From the start, he has had a particular fascination with deep packet inspection and the broader technologies that make use of this powerful software. This interest led Magnus to join ipoque, a market leader in the DPI field.
During his 13 years at ipoque, he has worked in development, pre-sales and consulting. Throughout this time, he has not only motivated, coached and advised people from around the globe, but also expanded his own experience by providing full-stack support, from rapid prototyping and integration support to application architecture design. When he is not out promoting ipoque, he enjoys seeing the world from his motorbike.

Related material

ipoque blog - discover the latest news and trends in IP network analytics
