Networks are essentially a collection of network functions. While most network functions we know today started off as proprietary hardware, cost and inter-vendor compatibility issues emerged as networks grew more complex. This ultimately led to the virtualization of network functions, where functions such as switches, routers and firewalls run as software, or virtualized network functions (VNFs), on industry-standard virtual machines for increased network agility and scalability.
As the adoption of cloud computing becomes increasingly prevalent, cloud-native IT architectures now see the same virtualized functions repackaged as microservices. The monolithic code that makes up a firewall, for example, is broken down into simpler code stacks, or ‘containers’, and orchestrated into a cloud network function (CNF) using APIs.
Our DPI engine R&S®PACE 2 provides traffic filtering, an essential functionality for today’s networks. As an OEM engine, R&S®PACE 2 is built into third-party proprietary hardware or integrated into third-party VNFs and CNFs, including DPI deployed as a VNF or CNF in its own right. R&S®PACE 2 is a software library written in C with no external dependencies, embedded into different computing environments to fulfil traffic filtering needs within a particular network function or network node.
Making DPI decisions in a world of VNFs and CNFs
Regardless of whether the architecture is traditional, virtualized or cloud-native, the deployment of a DPI engine such as R&S®PACE 2 is largely a software deployment decision. The decision whether to integrate a proprietary, open-source or in-house DPI engine has to weigh a range of key factors, chief among them performance and cost.
R&S®PACE 2 enables real-time visibility into IP traffic flows, up to and including the application layer of the OSI stack. The DPI engine also provides real-time classification and metadata extraction, applying machine learning and statistical analysis to identify traffic and applications instantaneously. Thanks to weekly updates to its signature library, the engine classifies applications and protocols across unlimited traffic flows, including encrypted traffic, with the highest accuracy. This enables R&S®PACE 2 to meet the performance requirement of accurate, high-volume traffic filtering. In addition, thanks to its platform-agnostic software design and flexible API, R&S®PACE 2 can be integrated and embedded seamlessly into other network functions.
Figuring out the real costs of a DPI engine
The performance of a DPI engine, in terms of processing capacity and accuracy, is however not the only factor determining its deployment. In the world of VNFs and, more specifically, CNFs, the performance of DPI software has to be weighed against its implications for overall network costs – both direct and indirect. While direct costs can be easily computed from the cost of hiring development teams or from vendor quotations for licensing fees and customization, indirect costs are typically hidden in the product’s specifications and build. Intrigued? Here’s more.
DPI software, like any other network component, is part of a wider ecosystem that shares computing, memory and storage resources. Because a DPI engine is incorporated into other network functions such as routers or firewalls, it must be designed to keep its resource consumption optimized. Optimized resource consumption determines a DPI engine’s efficiency, measured by weighing its performance against the CPU, memory and storage required to deliver that performance.
Buckling under one’s own weight
A typically overlooked aspect in this regard is a DPI engine’s memory footprint. The more memory a piece of software uses, i.e. the larger its memory footprint, the more random access memory (RAM) it consumes. While the published performance metrics of a DPI engine may look promising in its product brochure, its actual performance depends on the RAM it has access to. In a shared architecture, a DPI engine with a large memory footprint can easily be impaired when RAM is temporarily depleted, while DPI engines with a smaller memory footprint perform steadily.
Too many tenants, none too happy
In addition to impairing its own performance, a DPI engine with a larger memory footprint leaves less RAM for every other function or application with which it shares resources. A small memory footprint can therefore lead to hardware savings and possibly better performance. At the network level, as a lean add-on to existing, resource-limited hardware, a small-footprint component reduces the chances of memory limits being exceeded, avoiding device failures and, consequently, network slowdowns.
So, how does our own DPI engine R&S®PACE 2 fare in terms of memory footprint? At around 400 bytes per network flow (5-tuple), we can proudly say that the memory footprint of R&S®PACE 2 is the smallest on the market. This makes it a lightweight component when embedded into network functions such as switches, routers and access points, many of which run with very limited memory. Additionally, as a software component, R&S®PACE 2 supports virtualization: it is free of hardware constraints and offers a high degree of scalability along with substantial cost savings.
Ultimately, a small memory footprint is a vital facet of a top-performing DPI engine. R&S®PACE 2 excels in this regard, making it a great complement to equipment across network security, analytics and management. Whether such network functions run on virtual machines or in containers, a DPI engine that uses less memory keeps vital resources free for the entire workload and improves network performance across the board.
Download our R&S®PACE 2 solution guide to find out more about the best-performing DPI engine on the market.