In the [Modular design #001] article we described RARE/freeRtr's main components:
- a control plane
- a data plane, which can come in different "flavors"
- and a message API interface that ties the two components above together
In most of our past articles, all of these components ran on the same host, as is usually the case on traditional monolithic hardware. In this article we demonstrate that RARE/freeRtr inherently supports a fully disaggregated model. What does this mean in practice? In the following sections we will show that each of the components above can run on a different host.
While the control plane and the message API interface can run on almost any host, such as a bare metal server, a VM or even a container, the dataplane of your choice has to run on a specific platform. For example, if you plan to implement a core backbone MPLS router able to switch 6.4 Tbps of packet throughput, the dataplane must be specific hardware (e.g. powered by an INTEL TOFINO switch ASIC).
In this example we will consider two routers, named DAV0001 and DAV0101. Both of these routers establish an OSPF adjacency with each other.
- DAV0001 is a P4 switch powered by an INTEL TOFINO ASIC
- DAV0101 is a SOHO RARE/freeRtr router running on a DELL VEP1445
Let's assume the logical figure below:
[ #002 ] - Modular design
In reality, we will run each of DAV0001's components on a different host. We assume, of course, that there is connectivity between all of them.
Let's first start the INTEL TOFINO dataplane.
From that point the dataplane has activated its P4 CPU_PORT, which in our case is the ens1 Linux interface that appears when the bf_kpkt module is loaded.
Now the idea is to stitch the ens1 CPU_PORT to a freeRtr dataplane port and make sure that in-band control plane / dataplane communication can occur. The main ingredient of the secret sauce is the pcapInt tool. In essence, pcapInt is a freeRtr tool that lets you bind all packets from an interface to a UDP socket tunnel.
So, the line above can be read as: tunnel all packets from the ens1 interface over a UDP tunnel whose source is 184.108.40.206:20666 (bf_switchd@p4 switch) and whose destination is 220.127.116.11:20666 (freeRtr@server).
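The idea behind pcapInt can be sketched in a few lines of Python: raw frames taken from an interface are carried verbatim as the payload of UDP datagrams between the two endpoints. This is only a conceptual illustration (the real tool uses libpcap to tap the NIC; the loopback sockets below stand in for the two tunnel endpoints):

```python
# Conceptual sketch of the pcapInt idea: an Ethernet frame captured on one
# side travels unchanged inside a UDP datagram to the other side.
# The two loopback sockets below stand in for bf_switchd@p4 switch and
# freeRtr@server; the real tool reads frames from the NIC via libpcap.
import socket

# Create both tunnel endpoints on localhost with ephemeral ports.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 0))
a.connect(b.getsockname())
b.connect(a.getsockname())

# A minimal fake Ethernet frame: broadcast dst MAC, src MAC, IPv4 ethertype.
frame = bytes.fromhex("ffffffffffff0000000000010800") + b"payload"

a.send(frame)            # pcapInt side: frame read from ens1 goes out as-is
received = b.recv(2048)  # freeRtr side: the datagram payload is the raw frame
assert received == frame
```

Since the frame is carried as-is, the receiving side can hand it straight to the dataplane port as if it had arrived on a local interface.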
DAV0001 hardware configuration file:
DAV0001 software configuration file:
Now it is time to establish the interface between the control plane and the dataplane.
Basically, this command reads as follows:
- bf_forwarder.py is running at 18.104.22.168
- it binds to the freeRtr control plane running at 22.214.171.124
- and to the P4 dataplane running at 126.96.36.199
The output below shows a successful binding operation and the entries created during control plane / dataplane communication.
Once bf_forwarder.py has established the connection with both the control plane and the dataplane, freeRtr should display a "session UP" message on the console.
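Conceptually, bf_forwarder.py sits in the middle of two sessions and passes messages between them. The sketch below is not the real tool (which translates freeRtr's message API into Tofino table writes); it only illustrates the relay role, and the `route_add` message string is made up for illustration:

```python
# Conceptual sketch of bf_forwarder.py's role: it holds one session towards
# the freeRtr control plane and one towards the P4 dataplane, and relays
# messages between them. The real tool also translates freeRtr's message
# API into dataplane table operations; this sketch only shows the relaying.
import socket
import threading

def relay(src, dst, bufsize=4096):
    """Copy bytes from src to dst until src closes (one direction of the relay)."""
    while True:
        data = src.recv(bufsize)
        if not data:
            break
        dst.sendall(data)

# Socket pairs stand in for the two TCP sessions of the example topology.
ctrl_plane, fwd_ctrl = socket.socketpair()  # freeRtr <-> forwarder
fwd_dp, dataplane = socket.socketpair()     # forwarder <-> dataplane

t = threading.Thread(target=relay, args=(fwd_ctrl, fwd_dp), daemon=True)
t.start()

ctrl_plane.sendall(b"route_add 10.0.0.0/24")  # hypothetical control message
ctrl_plane.close()
t.join()

msg = dataplane.recv(4096)
assert msg == b"route_add 10.0.0.0/24"
```

Because the forwarder is just another process speaking over sockets, it can run on a third host, which is exactly what makes the fully disaggregated deployment possible.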
Check that you are sending and receiving packets between freeRtr@eth0 and P4_switch@CPU_PORT:
LLDP is your friend
In the above configuration we have enabled an OSPFv4 adjacency via sdn7 with DAV0101@sdn6.
The important thing to note is that we are pinging from a control plane that runs on a different host than the dataplane. Nothing special here, except that this is totally invisible from the operator's point of view!
This modular design, and its ability to run as a fully disaggregated model, is powerful because:
- the components are interchangeable: connecting freeRtr to another dataplane is a seamless operation
- it can be the foundation of a decentralized architecture where the control plane is not attached to the dataplane
- resiliency, of course, has to be thoroughly strengthened
In this second article you:
- saw how to implement a fully disaggregated RARE/freeRtr
- learned that even if the control plane and the message API interface can run almost anywhere, the dataplane still needs to be specific and adapted to the use case you plan to deploy
- discovered pcapInt, a nitty-gritty tool used to bind existing ports to a UDP socket tunnel