The rise of virtualized and cloud-native networks is revolutionizing today’s networking landscape. Rigid architectures are giving way to network instances that can be spun up in seconds and put to work immediately, leveraging generic machines that allow continuous repurposing of the underlying computing, storage and networking resources.
At the core of these advancements is the relentless push among developers for traffic management technologies that can keep pace with the aggressive growth in traffic volumes and applications. Among these is the adoption of vector packet processing (VPP), a packet processing stack designed to improve network speed and latency and save power. VPP is an open-source framework backed by the Linux Foundation as part of the Fast Data Project (FD.io). It utilizes packet input drivers such as XDP (eXpress Data Path) or DPDK (Data Plane Development Kit) and allows new processing nodes to be added flexibly to the processing graph.
How does VPP work?
The VPP framework is built around the concept of a packet processing graph. Each node within the graph performs a specific packet processing function, such as information extraction (lookup), information tagging or information altering, for example changing a MAC address, IP address or port for forwarding purposes. VPP processes packets in batches known as vectors or frames of up to 256 packets at a time.1 Each vector is processed using a directed graph of nodes, with all packets in the vector fully processed before it moves on to the next graph node. This allows VPP to keep each node’s instructions in the CPU’s instruction cache for the entire vector, significantly reducing instruction fetches from main memory and the time taken to completely process a batch.
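The batch-oriented loop at the heart of a graph node can be sketched as follows. This is a minimal, self-contained illustration: the `packet_t` struct and the TTL-decrement node are simplified stand-ins for this article, not VPP’s actual data structures.

```c
#include <stddef.h>
#include <stdint.h>

#define VPP_VECTOR_SIZE 256  /* maximum packets per vector/frame */

/* Simplified stand-in for a packet: just a time-to-live field. */
typedef struct {
    uint8_t ttl;
} packet_t;

/* A "graph node" that decrements the TTL of every packet in the vector.
 * Processing the whole batch in one tight loop keeps the node's
 * instructions hot in the CPU instruction cache -- the core idea of VPP.
 * Returns the number of packets that continue to the next node. */
size_t ttl_decrement_node(packet_t *vector, size_t n_packets)
{
    size_t n_forwarded = 0;
    for (size_t i = 0; i < n_packets && i < VPP_VECTOR_SIZE; i++) {
        if (vector[i].ttl > 1) {
            vector[i].ttl--;
            n_forwarded++;      /* packet proceeds through the graph */
        }
        /* packets with ttl <= 1 would be dropped by a real node */
    }
    return n_forwarded;
}
```

A real VPP node additionally prefetches buffer data and enqueues packets to per-next-node frames, but the batch loop above captures the instruction-cache amortization the text describes.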
Batch processing and cached instructions set VPP apart from its predecessor, scalar packet processing, by delivering higher processing speeds and lower latencies. VPP greatly enhances traffic processing efficiency for software-based packet processing, as in the case of 5G UPFs (user plane functions), cloud-native network functions (CNFs) and virtual network functions (VNFs).
Popular use cases
VPP is already prevalent across various networking use cases. Cisco uses VPP in a wide range of its networking products, including its VPP software switches and its Virtualized Infrastructure Manager (VIM). Others include container network interface (CNI) solutions, Ligato and network service mesh (NSM).2 For CNI solutions, some of the plugins for configuring network interfaces in Linux containers use VPP technology. Ligato, a framework for building applications to manage CNFs, leverages a VPP agent that provides plugins for programming different network features in the VPP data plane. NSM’s data plane, which provides forwarding mechanisms for implementing connections between containers in a network service mesh, uses VPP. VPP is also deployed widely across security solutions such as firewalls. Examples include Pantheon’s cloud-native and unified firewall3 and Netgate’s TNSR4, which combines firewall, routing and VPN functionalities.
Traffic visibility, specifically application awareness, plays a key role in VPP. It forms the basis for protocol- or application-aware vectoring – that is, the creation of vectors where packets with similar attributes are batched together. It also enables more efficient load balancing of packets within a forwarding graph.
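Application-aware vectoring can be illustrated with a short sketch that sorts incoming packets into per-application vectors. The `app_id` field stands in for a DPI classification result and is an assumption of this example, not part of VPP’s API.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_APPS 8    /* number of per-application vectors in this sketch */
#define VEC_CAP  256  /* packets per vector */

/* Hypothetical packet carrying a DPI-assigned application ID. */
typedef struct {
    uint8_t app_id;
} pkt_t;

typedef struct {
    pkt_t  pkts[VEC_CAP];
    size_t len;
} app_vector_t;

/* Batch packets with similar attributes together: each incoming packet
 * is appended to the vector for its application, so one vector carries
 * traffic of one application and can be processed as a unit. */
void vectorize_by_app(const pkt_t *in, size_t n, app_vector_t out[MAX_APPS])
{
    for (size_t i = 0; i < n; i++) {
        uint8_t app = in[i].app_id % MAX_APPS;  /* bucket by app ID */
        if (out[app].len < VEC_CAP)
            out[app].pkts[out[app].len++] = in[i];
    }
}
```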
This visibility is delivered by traffic inspection tools such as the new R&S®vPACE (Vectoring Protocol Application Classification Engine), a vectoring-enabled DPI engine capable of classifying and detecting applications and protocols in real time at high speeds. R&S®vPACE is developed specifically for vector packet processing and caters to an array of vector-based packet processing frameworks, including FD.io’s VPP.
R&S®vPACE can be embedded within a virtualized network function to identify packets by the underlying application, protocol or service. Using advanced methods such as statistical, behavioral and heuristics analysis and machine learning, R&S®vPACE replaces shallow inspection techniques typically used to support VPP to deliver deeper insights into the traffic navigating the VPP forwarding graph.
The DPI classification result stack generated by R&S®vPACE is designed to conform to the limitations of the vnet-layer opaque data provided by the VPP framework. This means that the user can use the VPP packet vector buffer directly to retrieve, store and convey DPI classification results to subsequent graph nodes. This information is key to optimizing vectors, aligning VPP graphs to current traffic needs and improving network security, paving the way for high-performance network functions across both virtualized and cloud-native environments.
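Passing a compact classification result downstream through per-packet opaque metadata can be sketched as follows. The struct layouts here are simplified stand-ins for VPP’s real buffer metadata, and the `dpi_result_t` fields are hypothetical.

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for a packet buffer: VPP's vlib/vnet buffers
 * reserve a small per-packet opaque scratch area that graph nodes can
 * use to hand metadata downstream; this mimics that idea only. */
typedef struct {
    uint32_t opaque[10];
} buffer_t;

/* Hypothetical compact DPI result: protocol and application IDs. */
typedef struct {
    uint16_t protocol_id;
    uint16_t application_id;
} dpi_result_t;

/* Store the classification result in the buffer's opaque area so that
 * subsequent graph nodes can read it without re-classifying the packet. */
void buffer_store_dpi(buffer_t *b, const dpi_result_t *r)
{
    memcpy(&b->opaque[0], r, sizeof(*r));  /* 4 bytes: one opaque word */
}

dpi_result_t buffer_load_dpi(const buffer_t *b)
{
    dpi_result_t r;
    memcpy(&r, &b->opaque[0], sizeof(r));
    return r;
}
```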
For faster delivery of applications
Vector processing can handle up to 256 packets at a time. Random classification of packets into a vector results in incoming traffic flows being split across different processing cycles, which may affect the speed at which an application or service is delivered. Real-time traffic classification by R&S®vPACE, leveraging its frequently updated signature library, delivers accurate identification of each packet. This allows IP packets from a single session, application or user to be aggregated in the same vector, and enables vectors of a single session or application to be processed sequentially, so that certain applications can be processed and delivered faster.
For faster shaping of graphs and node navigation pathways
Information provided by R&S®vPACE is also crucial in determining the traffic processing and routing routines within the VPP forwarding graph. Identifying the underlying attributes enables the graph node dispatcher to connect each packet to existing nodes or newly added plugins that allocate routing and processing functions specific to the applications being delivered. This improves the packet forwarding mechanism within a VPP forwarding graph. For example, where a header needs to be added, only vectors with packets requiring an additional header are routed through the relevant nodes. Derived actions, such as QoS rules or filtering, can be based on the classification results of the DPI-analyzed vectors.
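A classification-driven dispatch decision might look like the following sketch. The next-node indices and the `APP_VOIP` application ID are invented placeholders for illustration, not actual VPP or R&S®vPACE identifiers.

```c
#include <stdint.h>

/* Hypothetical next-node indices in a VPP-style forwarding graph. */
enum {
    NEXT_ADD_HEADER = 0,  /* node that prepends an extra header */
    NEXT_QOS        = 1,  /* node that applies QoS shaping       */
    NEXT_DROP       = 2,  /* node that discards the packet       */
    NEXT_FORWARD    = 3,  /* plain forwarding node               */
};

#define APP_VOIP 7  /* made-up ID for a latency-sensitive application */

/* Pick the next graph node for a packet based on its DPI classification:
 * only packets that actually need a header visit the header node, and
 * latency-sensitive traffic is steered through the QoS node. */
uint32_t next_node_for(uint16_t application_id, int needs_header)
{
    if (needs_header)
        return NEXT_ADD_HEADER;
    if (application_id == 0)
        return NEXT_DROP;      /* unclassified traffic dropped, per policy */
    if (application_id == APP_VOIP)
        return NEXT_QOS;
    return NEXT_FORWARD;
}
```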
For maximizing the use of cached memory
The whole idea of VPP centers on not having to scale computing resources linearly with traffic, as is typical in traditional packet processing. With VPP, the cached instructions can be reused for all packets in a vector, as they share a common processing goal. This significantly reduces the CPU cost of processing the remaining packets in that vector.
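This amortization argument can be made concrete with a toy cost model: scalar processing pays the instruction-fetch cost for every packet, while vector processing pays it once per vector. The cycle counts used in the test are illustrative assumptions, not measurements.

```c
/* Toy cost model: fetching a node's instructions into cache costs
 * `fetch` cycles, and the per-packet work costs `work` cycles. */

/* Scalar processing: every packet traverses the whole graph alone,
 * so the fetch cost is paid once per packet. */
unsigned scalar_cost(unsigned n_pkts, unsigned fetch, unsigned work)
{
    return n_pkts * (fetch + work);
}

/* Vector processing: the fetch cost is paid once per vector and
 * amortized across all packets in it. */
unsigned vector_cost(unsigned n_pkts, unsigned fetch, unsigned work)
{
    return fetch + n_pkts * work;
}
```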
VPP graphs comprise instances that are instantiated dynamically to support various packet processing tasks within VNFs and CNFs. VPP graphs may require continuous realignment as these functions adapt to the underlying applications being processed. This involves forming new nodes or rewiring existing ones to match the needs of the traffic. By matching incoming packets against known signatures, R&S®vPACE identifies these applications and assists VPP by providing deep visibility into the traffic flows. This supports the automated instantiation of new nodes and the removal of redundant ones, improving the responsiveness of the VPP framework in handling the many applications processed every second.
R&S®vPACE, when deployed for security functions such as cloud-native firewalls in cloud-native architectures, or virtualized firewalls deployed on virtual machines, enables network managers to identify threats hidden in a vector. With embedded application awareness, these functions benefit from the highly accurate detection of suspicious and anomalous traffic.
R&S®vPACE also enhances VPP with its ability to tap into cutting-edge traffic inspection techniques, leveraging advanced methods such as machine learning and deep learning to detect applications that are encrypted, anonymized and obfuscated. This retains traffic visibility even as more of these applications navigate the network, ensuring the use of accurate vector and VPP forwarding policies.
While the move to cloud-native networking and virtualization speeds up today’s networks and improves their efficiencies, processing enhancements at the packet level enabled by VPP are expected to significantly augment the performance of each 5G-UPF, VNF and CNF, leading to further advancements in how IP networks are built and managed. Combined with real-time traffic visibility, VPP will see the creation of networks that are not only scalable, responsive and agile, but also guided by packet and application-level awareness that brings the intelligence with which future networks are to be built.