Configuring Multi-Table

Hardware OpenFlow Multi-table Limitations

OpenFlow 1.1 and later versions have a concept of tables: independent lookup tables that can be chained in any way the user wishes. This is a very useful concept for decreasing the number of flows by segmenting them across multiple tables.

Implementing multiple tables is not difficult in a software-based switch like OVS, but it is a substantial issue for a hardware-based switch, because most ASICs have a limited set of capabilities and their ability to perform multiple lookups on a packet is severely constrained.

The multi-table concept is nevertheless very useful for emulating an ASIC pipeline. It allows an OpenFlow-based solution to leverage far more of the switch ASIC's capabilities, such as complex lookups or the different types of memory available on the ASIC. A hardware-based multi-table implementation must therefore be constrained to reflect the limitations of the underlying ASIC.

This means that the number of tables, the conflicts between tables, the capacity of those tables, and the links between them are limited by the implementation. The ONF now defines this more generically as a Table Type Pattern (TTP).

Multi-Tables in TCAM

Traditionally, in a hardware-based OpenFlow implementation, flows are placed in the switch's TCAM memory. This memory is ideal for complex matching (it can match on many parts of the packet header, and the actions possible once a flow is matched are very diverse), and as such it is a good fit for most OpenFlow solutions.

In PicOS, by default, all OpenFlow tables are implemented in TCAM.

It is possible to create multiple OpenFlow tables, but because only one hardware table is available, the OS normalizes the flows into that single hardware table (table 0).

Because the normalization process cannot reproduce every kind of multi-table logic, this TCAM-only multi-table implementation is generally not recommended; it is mainly intended for proof-of-concept or demonstration purposes.

One way to ensure that the normalization process renders the flow logic correctly is to place actions in only one of the tables; all the other tables should contain only drop or goto actions.
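
For illustration, a flow set following this guideline might look as follows (the bridge name br0 and the VLAN, MAC, and port values are hypothetical): table 0 only steers or drops, while table 1 carries the actual actions. Depending on the OVS version, the OpenFlow version may need to be given explicitly (for example, with -O OpenFlow13) for goto_table to be accepted.

admin@PicOS-OVS$ovs-ofctl add-flow br0 table=0,dl_vlan=10,actions=goto_table:1
admin@PicOS-OVS$ovs-ofctl add-flow br0 table=0,priority=0,actions=drop
admin@PicOS-OVS$ovs-ofctl add-flow br0 table=1,dl_vlan=10,dl_dst=22:22:22:22:22:22,actions=output:2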

Note:

  • If an L2/L3 flow entry references a port that does not exist in the bridge, the flow cannot be added correctly.
  • The command for adding an L3 flow has changed: to use the L3 table for routing, the user must first enable L2 mode and add a system MAC flow to the L2 table (the action is normal in OVS 2.3 and goto_l3 in OVS 2.6); see the example after this list.
  • Route flows are limited to 12000 entries by default.
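
For illustration, assuming L2 mode has been enabled with the default FDB table (251), the system MAC flow might look as follows. The VLAN and MAC address are hypothetical placeholders for the switch's own values, and the goto_l3 spelling follows the note above; its exact syntax may vary by version.

For OVS 2.3:
admin@PicOS-OVS$ovs-ofctl add-flow br0 table=251,dl_vlan=10,dl_dst=00:11:22:33:44:55,actions=normal

For OVS 2.6:
admin@PicOS-OVS$ovs-ofctl add-flow br0 table=251,dl_vlan=10,dl_dst=00:11:22:33:44:55,actions=goto_l3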


Using the Forwarding Database instead of the TCAM

Starting in version 2.4, PicOS supports the FDB (forwarding database) table and the ROUTE table, as in the traditional L2/L3 mode. In other words, flows can be stored not only in the TCAM table but also in the FDB or ROUTE tables. See the Switch Hardware Architecture for a description of the actual hardware pipeline.

This is very useful when the solution must scale: it allows more of the switch's memories to be used and gives access to more complex lookups.

The FDB tables consist of a MAC table (similar to a typical L2 switch MAC lookup) and an IP table (similar to a typical L3 router IP lookup). The user can choose to download flows into the TCAM (the default), the MAC table, or the IP table.

Every packet is matched against all of these tables. Conflicts between tables (different actions in different tables) are resolved by the table priority, which is configurable.

The OpenFlow "goto" action is not supported between these tables. In this hardware implementation, all tables are always used.

To map a specific OpenFlow table to the MAC table, use the following command:

set-l2-mode TRUE|FALSE [TABLE] enables the MAC table to store flows. [TABLE] is the number of the OpenFlow table to use as the FDB table; by default it is table 251. Flows to be stored in the MAC table must strictly match dl_dst and dl_vlan, with an output port as the flow's action.
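
For example, to explicitly map the default table (251) to the MAC table:

admin@PicOS-OVS$ovs-vsctl set-l2-mode true 251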

To map a specific OpenFlow table to the IP table, use the following command:

set-l3-mode TRUE|FALSE [TABLE] enables the ROUTE table to store flows. [TABLE] is the number of the OpenFlow table to use as the ROUTE table; by default it is table 252. Flows to be stored in the ROUTE table must strictly match dl_type and nw_dst, with mod_dl_dst, mod_dl_vlan, and an output port as the flow's actions. Note that the user must first add a flow with the normal action to the FDB table for the L3 flow to work.
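
For example, to explicitly map the default table (252) to the ROUTE table:

admin@PicOS-OVS$ovs-vsctl set-l3-mode true 252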

By default, TCAM matching has a higher priority than the L2/L3 tables (the default preference value is 0). Use the ovs-vsctl set-l2-l3-preference true command to give the MAC/ROUTE tables a higher priority than the TCAM table.
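
For example:

admin@PicOS-OVS$ovs-vsctl set-l2-l3-preference true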

By default, the ROUTE table has a higher priority than the MAC table.

In the current implementation, a maximum of three hardware tables can contain flows simultaneously: one TCAM table, one ROUTE table, and one MAC table.

Examples


FDB table configuration example

Step 1:  Create a new bridge named br0.

admin@PicOS-OVS$ovs-vsctl add-br br0 -- set bridge br0 datapath_type=pica8

Step 2:  Add ports to br0.

admin@PicOS-OVS$ovs-vsctl add-port br0 te-1/1/1 vlan_mode=trunk tag=1  -- set Interface te-1/1/1 type=pica8
admin@PicOS-OVS$ovs-vsctl add-port br0 te-1/1/2 vlan_mode=trunk tag=1  -- set Interface te-1/1/2 type=pica8

Step 3:  Set L2 mode to true without specifying a table number

admin@PicOS-OVS$ovs-vsctl set-l2-mode true

Step 4:  Add a flow with table = 251

admin@PicOS-OVS$ovs-ofctl add-flow br0 table=251,dl_vlan=10,dl_dst=22:22:22:22:22:22,actions=output:2 

Check the flows in hardware using the ovs-appctl pica/dump-flows command; the flow is stored in the L2 table. To use table 2 as the FDB table instead, run ovs-vsctl set-l2-mode true 2.
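
For example:

admin@PicOS-OVS$ovs-appctl pica/dump-flows
admin@PicOS-OVS$ovs-vsctl set-l2-mode true 2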

ROUTE table configuration example

Step 1:  Create a new bridge named br0.

admin@PicOS-OVS$ovs-vsctl add-br br0 -- set bridge br0 datapath_type=pica8

Step 2:  Add ports to br0.

admin@PicOS-OVS$ovs-vsctl add-port br0 te-1/1/1 vlan_mode=trunk tag=1  -- set Interface te-1/1/1 type=pica8
admin@PicOS-OVS$ovs-vsctl add-port br0 te-1/1/2 vlan_mode=trunk tag=1  -- set Interface te-1/1/2 type=pica8

Step 3:  Set L3 mode to true without specifying a table number

admin@PicOS-OVS$ovs-vsctl set-l3-mode true

Step 4:  Add the prerequisite normal flow to table 251, then add a flow with table = 252

admin@PicOS-OVS$ovs-ofctl add-flow br0 table=251,dl_dst=22:22:22:22:22:22,dl_vlan=10,actions=normal
admin@PicOS-OVS$ovs-ofctl add-flow br0 table=252,dl_type=0x0800,nw_dst=192.168.2.30,actions=set_field:44:44:44:22:22:22->dl_dst,set_field:40->vlan_vid,output:2

Check the flows in hardware using the ovs-appctl pica/dump-flows command; the flow is stored in the L3 table. To use table 4 as the ROUTE table instead, run ovs-vsctl set-l3-mode true 4.
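
For example:

admin@PicOS-OVS$ovs-appctl pica/dump-flows
admin@PicOS-OVS$ovs-vsctl set-l3-mode true 4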

Egress Flow Table

The match fields of the egress OpenFlow table are very similar to those of the ingress TCAM table; the only difference is that some IPv6 fields cannot be matched.
The full match list is: in_port, src_mac, dst_mac, ether_type, vlan_id, vlan_priority, ip_protocol, ipv4_src_addr, ipv4_dst_addr, ipv4_tos, tcpudp_src_port, and tcpudp_dst_port.

The egress table holds 512 flows on most platforms (the P-3290 is an exception, limited to 256 flows).

For a full description of the commands used to configure the egress TCAM, see Egress-mode Command.


Copyright © 2024 Pica8 Inc. All Rights Reserved.