Principle


Definition

Control Plane Policing (CoPP) defines the traffic classification, queue mapping, and queue shaping for control plane packets directed to the switch CPU. It protects the switch from being overwhelmed by malicious attacks or traffic overload, maintaining data forwarding and network topology stability. CoPP uses a dedicated control plane configuration built on the QoS module's CoS (Class of Service) and firewall filter rules. Figure 1 shows the CoPP process.

Figure 1 CoPP Process

 

The CoPP process follows four steps (see the sketch after this list):

Classifying: CoPP identifies and classifies the traffic handled by the switch CPU according to Layer 2, Layer 3, and Layer 4 packet information, based on firewall filter rules.

Queue mapping: This step sends different types of packets to the specified CPU queues. Packets in different queues have different scheduling priorities according to their scheduling weights.

Scheduling: When a network is intermittently congested and delay-sensitive services require more bandwidth than other services, or when packets are waiting in multiple queues, scheduling selects a queue with a scheduling algorithm and processes the packets from that queue. CoPP uses the Weighted Round Robin (WRR) scheduling algorithm; refer to 1.1.3 Queue Mapping and Scheduling for details about WRR.

Queue shaping: A minimum and maximum bandwidth, in packets per second (pps), is set for each CPU queue. This per-queue bandwidth limit ensures that the CPU is not excessively loaded under any circumstances.
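
The following Python sketch is purely illustrative of how the four steps fit together; the class names, queue numbers, weights, and pps values are invented for this example and do not correspond to any PICOS command or internal implementation.

```python
# Illustrative model of the CoPP pipeline: classify -> map to CPU queue ->
# schedule by weight -> shape by per-queue pps limits.
# All names and numbers below are hypothetical, not PICOS configuration.

from dataclasses import dataclass

@dataclass
class CoppClass:
    name: str          # traffic class matched by a firewall filter rule
    cpu_queue: int     # CPU queue the class is mapped to
    weight: int        # WRR scheduling weight of that queue
    min_pps: int       # queue shaping: guaranteed packets per second
    max_pps: int       # queue shaping: maximum packets per second

# Example policy: protocol traffic gets a higher-weight queue than unclassified traffic.
policy = [
    CoppClass("routing-protocols", cpu_queue=7, weight=5, min_pps=500, max_pps=2000),
    CoppClass("arp",               cpu_queue=3, weight=2, min_pps=100, max_pps=1000),
    CoppClass("default",           cpu_queue=0, weight=1, min_pps=50,  max_pps=500),
]

for c in policy:
    print(f"{c.name}: queue {c.cpu_queue}, weight {c.weight}, "
          f"{c.min_pps}-{c.max_pps} pps")
```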

CoPP Traffic Classification

Based on the firewall filter rules, control plane packets directed to the switch CPU are checked against the matching fields specified in those rules. If a packet matches a specified matching field, it is considered a member of a class and is mapped to a specified CPU queue according to the queue mapping policy.

AND is the logical operator between the matching fields with the same sequence number; that is, to match a firewall filter rule and be included in its class, a packet must match all of the matching fields that share the same sequence number.
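
To make the AND semantics concrete, the short Python sketch below models a rule as the set of matching fields that share one sequence number; a packet belongs to the class only if every field matches. The field values chosen here (OSPF, protocol number 89, destination 224.0.0.5) are hypothetical examples, not PICOS syntax.

```python
# A rule is the set of matching fields sharing one sequence number.
# A packet matches the rule only if it matches ALL of those fields (logical AND).

rule_10 = {"protocol": 89, "destination-address-ipv4": "224.0.0.5"}  # e.g. OSPF hellos

def matches(packet: dict, rule: dict) -> bool:
    return all(packet.get(field) == value for field, value in rule.items())

pkt_a = {"protocol": 89, "destination-address-ipv4": "224.0.0.5", "vlan": 10}
pkt_b = {"protocol": 89, "destination-address-ipv4": "10.0.0.1"}

print(matches(pkt_a, rule_10))  # True  - every field of the rule matches
print(matches(pkt_b, rule_10))  # False - destination address differs
```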

CoPP supports both IPv4 and IPv6 firewall filters. The supported matching fields are described as follows:

  • Destination-mac-address/source-mac-address: Filter packets with a specific destination/source MAC address.

  • Destination-address-ipv4/source-address-ipv4: Filter packets with a specific destination/source IPv4 address.

  • Destination-address-ipv6/source-address-ipv6: Filter packets with a specific destination/source IPv6 address.

  • Protocol: Protocol is the Protocol field of the IPv4 header or the Next Header field of the IPv6 header. It can be a protocol name or a protocol number identifying the protocol type of the packet. Examples of assigned Internet protocol numbers are 8 for EGP, 9 for IGP, 47 for GRE, 88 for EIGRP, 103 for PIM, and 112 for VRRP.

  • Destination-port/source-port: Filter packets with a specific destination/source port.

  • Vlan: A switch identifies packets from different VLANs by the VLAN ID contained in the VLAN tag of the Ethernet frame. Users can set a VLAN ID in the firewall filter rule for traffic classification.

  • Ether-type: Ether type is a two-octet field in an Ethernet frame, defined by the Ethernet networking standard, that indicates which protocol is transported in the frame. Table 1 shows the Ether type values of common protocols, and a short example follows the table.


Table 1 Ether Type values of common protocols

Protocol Type                                      Ether Type (Hexadecimal)
Internet Protocol, Version 4 (IPv4)                0x0800
Address Resolution Protocol (ARP)                  0x0806
Reverse Address Resolution Protocol (RARP)         0x8035
AppleTalk (Ethertalk)                              0x809B
AppleTalk Address Resolution Protocol (AARP)       0x80F3
IEEE 802.1Q-tagged frame                           0x8100
Novell IPX (alt)                                   0x8137
Novell                                             0x8138
Internet Protocol, Version 6 (IPv6)                0x86DD
Ethernet Slow Protocols                            0x8809
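
As a hypothetical illustration of how an Ether type value from Table 1 could be used in classification, the sketch below matches ARP frames (Ether type 0x0806) and maps them to a CPU queue; the rule structure and queue number are invented for this example and are not PICOS syntax.

```python
# Hypothetical example: classify ARP frames by Ether type and map them to CPU queue 3.
ETHERTYPE_ARP = 0x0806   # value taken from Table 1

arp_rule = {"ether-type": ETHERTYPE_ARP}
arp_queue = 3            # illustrative queue number

frame = {"ether-type": 0x0806, "source-mac-address": "00:11:22:33:44:55"}

if all(frame.get(f) == v for f, v in arp_rule.items()):
    print(f"ARP frame -> CPU queue {arp_queue}")
```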

Queue Mapping and Scheduling

Queue Mapping

With CoPP traffic classification, packets that match a firewall filter rule will be sent to a specified CPU queue according to the queue mapping policy.

If a packet does not match any class with ACL action discard or forward, match processing is as follows: the packet classifications included in the queue mapping policy are processed top-down, and once a packet matches a class, no further match processing is performed. That is, a packet can belong to only one class, the first one that it matches.

If a packet matches a class with ACL action discard or forward, the match processing is as follows:

  • When a packet matches a class with action discard, the switch discards the packet and does not match it against the remaining ACLs.

  • When a packet matches a class with action forward, the switch forwards the packet according to the first matched CoPP forward class and does not match it against the remaining ACLs.

When a packet directed to the switch CPU matches none of the defined firewall filter rules, it is automatically mapped to CPU queue 0.
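
The first-match behavior, the discard and forward actions, and the default mapping to queue 0 can be sketched as follows; the rule list, actions, and queue numbers are hypothetical and only illustrate the top-down evaluation described above.

```python
# Top-down evaluation of classes in the queue mapping policy.
# The first matching class decides the outcome; unmatched packets go to CPU queue 0.

rules = [
    {"match": {"protocol": 1},           "action": "discard"},              # e.g. drop ICMP to CPU
    {"match": {"destination-port": 179}, "action": "forward", "queue": 6},  # e.g. BGP
    {"match": {"ether-type": 0x0806},    "action": "forward", "queue": 3},  # e.g. ARP
]

def map_packet(packet: dict) -> str:
    for rule in rules:                                   # processed top-down
        if all(packet.get(f) == v for f, v in rule["match"].items()):
            if rule["action"] == "discard":
                return "discarded"                       # remaining rules are not checked
            return f"queue {rule['queue']}"              # first forward match wins
    return "queue 0"                                     # no match: default CPU queue

print(map_packet({"destination-port": 179}))   # queue 6
print(map_packet({"protocol": 1}))             # discarded
print(map_packet({"vlan": 20}))                # queue 0
```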

The mapping relationships between firewall filter rules and CPU queues are as follows (see the sketch after this list):

  • One or more firewall filter rules can be mapped to one CPU queue (n to one mapping).

  • Each firewall filter rule can be mapped to at most one queue.
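
A tiny illustration of the n-to-one relationship, with hypothetical rule names and queue numbers: several rules can share one CPU queue, while each rule appears only once and therefore maps to at most one queue.

```python
# Hypothetical rule-to-queue mapping: n rules to one queue, each rule to at most one queue.
rule_to_queue = {
    "allow-ospf": 6,
    "allow-bgp":  6,   # two rules share CPU queue 6
    "allow-arp":  3,
    "allow-lldp": 3,   # two rules share CPU queue 3
}

# Each rule (dictionary key) appears only once, so it maps to at most one queue.
for rule, queue in rule_to_queue.items():
    print(f"{rule} -> CPU queue {queue}")
```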

Scheduling

CoPP uses WRR (Weighted Round Robin) to schedule packets in queues.

To ensure that every queue receives a certain amount of servicing time, WRR schedules the queues in round-robin fashion. In WRR mode, each queue can be assigned a weighted value, also known as its scheduling weight, which determines the proportion of scheduling resources the queue receives when the egress port schedules packets. The scheduling unit is Kbps. The following example illustrates the WRR scheduling algorithm:

On a 1000 Mbps egress port, the scheduling weights of the eight queues are 5, 4, 3, 3, 2, 1, 1, and 1; this ensures that even the lowest-priority queue gets bandwidth.

The bandwidth guaranteed to the lowest-priority queue is calculated as follows:

1 / (5+4+3+3+2+1+1+1) × 1000 Mbps = 50 Mbps.

This avoids the problem of packets in lower-priority queues going unserviced for long periods. A further advantage is that, although the queues are scheduled in round-robin order, each queue is not allocated a fixed service time: if a queue is empty, the next queue is scheduled immediately, making full use of bandwidth resources. When using the WRR scheduling algorithm, users can define the weighted value of each queue.
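
The minimal Python sketch below models weighted round robin with the weights from the example above (5, 4, 3, 3, 2, 1, 1, 1): each non-empty queue is served in proportion to its weight, and empty queues are skipped immediately, as described. It illustrates the algorithm only, not the switch's implementation, and the packet contents are invented.

```python
from collections import deque

# Weighted round robin over eight CPU queues with the example weights.
weights = [5, 4, 3, 3, 2, 1, 1, 1]
queues = [deque() for _ in weights]

# Enqueue some illustrative packets (queue 7 is left empty on purpose).
for q in range(7):
    for n in range(3):
        queues[q].append(f"q{q}-pkt{n}")

def wrr_round(queues, weights):
    """One WRR round: serve up to `weight` packets per queue, skip empty queues."""
    served = []
    for queue, weight in zip(queues, weights):
        if not queue:
            continue                      # empty queue: move to the next one immediately
        for _ in range(min(weight, len(queue))):
            served.append(queue.popleft())
    return served

print(wrr_round(queues, weights))
```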

Queue Shaping

Queue shaping adjusts the rate of traffic sent to the switch CPU. It helps reduce traffic bursts so that packets are delivered to the CPU at a stable rate. For queues of different priorities, the device can provide differentiated services by using different settings of the queue shaping parameters Max-bandwidth-pps and Min-bandwidth-pps, described below and sketched after the list.

  • Max-bandwidth-pps: Max-bandwidth-pps is the maximum packet processing rate of a CPU queue in packets per second (pps). If heavy traffic on an interface is caused by malicious attacks or network exceptions, the CPU can be overloaded and services interrupted. To avoid this, users can set a maximum bandwidth on the CPU queue for queue shaping.

  • Min-bandwidth-pps: Min-bandwidth-pps is the guaranteed bandwidth of a CPU queue in pps. The total min-bandwidth-pps of all activated queues should be less than the bandwidth the CPU can handle, that is, the maximum bandwidth threshold of the CPU, which differs by platform.
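
The sketch below illustrates the two shaping parameters: a per-queue limiter that caps the number of packets accepted per second (max-bandwidth-pps), and a sanity check that the sum of the guaranteed rates stays below an assumed CPU budget. The queue numbers, pps values, and the 10,000 pps budget are invented for this example; the real threshold is platform dependent.

```python
import time

# Hypothetical per-queue shaping parameters in packets per second.
shaping = {
    6: {"min_pps": 500, "max_pps": 2000},   # e.g. routing protocol queue
    3: {"min_pps": 100, "max_pps": 1000},   # e.g. ARP queue
    0: {"min_pps": 50,  "max_pps": 500},    # default queue
}

CPU_BUDGET_PPS = 10_000  # assumed platform limit, for illustration only

# The guaranteed rates of all activated queues must fit within the CPU budget.
assert sum(q["min_pps"] for q in shaping.values()) < CPU_BUDGET_PPS

class PpsLimiter:
    """Accept at most `max_pps` packets within each one-second window."""
    def __init__(self, max_pps: int):
        self.max_pps = max_pps
        self.window_start = time.monotonic()
        self.count = 0

    def accept(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 1.0:   # start a new one-second window
            self.window_start, self.count = now, 0
        if self.count < self.max_pps:
            self.count += 1
            return True
        return False                          # over the cap: packet is dropped

limiter = PpsLimiter(shaping[3]["max_pps"])
accepted = sum(limiter.accept() for _ in range(1500))
print(f"accepted {accepted} of 1500 packets in this window")  # capped at 1000
```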
