Re: [odp4vpp] project proposal


Edward Warnicke
 

Francois,

Thank you for the thought you have given this :)  It can sometimes be challenging to tease these things out cleanly, and your patience in putting in the time to get it right is deeply appreciated.

Is the odp4vpp project proposal in a shape you are happy with (I do note you've updated it)?
If so, would you like to schedule the odp4vpp project review for this coming Thursday?

Ed

On Mon, Jan 16, 2017 at 3:51 AM, Francois Ozog <francois.ozog@...> wrote:
Hi,

After giving some more thought to the previous mail, I now see that II and III are not ODP-specific.

As a matter of fact, having VPP on DPDK running autonomously in a smartNIC may require communication with the host over PCI, at least for the Honeycomb agent.
The same applies to inline acceleration handling; it is also applicable to VPP over DPDK.

Based on this new point of view, I would like to limit the scope of the [odp4vpp] project to the first paragraph. In that case, Ed's scope proposal is perfect.

As a consequence, I will also propose the creation of other projects (names to be found):
- smartNIC: VPP and Honeycomb running on the same host, connected through a hardware-supported message queue. This project needs to be created quickly, as it is mandatory to support Kalray's VPP.
- inline accelerators: a VPP graph introspection API to manage offloading parts of the graph to hardware. Creation of this project can be deferred until after more discussion with the VPP community.

Cordially,

FF



On 9 January 2017 at 15:13, Francois Ozog <francois.ozog@...> wrote:
Hi Ed,

Many thanks for the proposal. It looks way simpler and clearer than my first cut.

I would like to take this opportunity to list the expected deliverables and their associated "scope" so that we may find the best wording (if it is just a matter of wording, of course).

I - native ODP hardware access: scope is plugin only.

A key abstract "object" in ODP is an odp_packet_t and silicon vendors maps it to the hardware packet descriptor. Linaro reference implementation define an odp_packet much like DPDK defines an rte_mbuf. When writing the odp_input for VPP, we want to avoid an intermediate metadata: there will be vpp buffer and the hardware buffer. We expect to bring some performance gains for some x86 NICs such as Chelsio and Netcope. Those NICs coalesce multiple packets per DMA transaction to allow full line rate at 40Gbps and higher connectivity.

The form factor can be a server or a smartNIC with onboard processors. In this case, VPP and its companion processes (the Honeycomb agent) run in the same environment (be it a server or a smartNIC).

II - native ODP hardware access in special processing environments: scope is plugin and management message queue

Some smartNICs, such as Kalray's, have a very high core count (248 in the case of Kalray) and a GPU programming model. Each core has private memory that cannot be accessed from another core. ODP can accommodate such running environments, and we think that VPP can also run in this environment.

Because of these constraints, the Honeycomb agent may have to run on the host. The message queue model used to connect the agent and VPP is implemented on shared memory. Many solutions can be envisioned when the agent and VPP are on the same host but cannot directly share memory: from a transparent proxy to a PCI-based queue implementation.
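To illustrate the PCI-based option, below is a minimal sketch of a single-producer/single-consumer message ring that could live in a window visible to both sides (a BAR mapping on the host, local memory on the smartNIC). All names and sizes are assumptions; a real design would also need doorbells or interrupts and a framing protocol for the management messages.

/* Sketch of an SPSC message ring placed in memory visible to both the
 * host agent and VPP on the smartNIC. Purely illustrative: real
 * hardware queues need doorbells and platform-specific barriers. */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 64          /* must be a power of two */
#define SLOT_BYTES 256         /* fixed-size management message */

typedef struct
{
  _Atomic uint32_t head;       /* written by the producer only */
  _Atomic uint32_t tail;       /* written by the consumer only */
  uint8_t slot[RING_SLOTS][SLOT_BYTES];
} msg_ring_t;

/* Returns 0 on success, -1 if the ring is full or the message too big. */
static int
ring_send (msg_ring_t * r, const void *msg, uint32_t len)
{
  uint32_t head = atomic_load_explicit (&r->head, memory_order_relaxed);
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_acquire);
  if (len > SLOT_BYTES || head - tail == RING_SLOTS)
    return -1;
  memcpy (r->slot[head % RING_SLOTS], msg, len);
  /* Publish the slot contents before advancing head. */
  atomic_store_explicit (&r->head, head + 1, memory_order_release);
  return 0;
}

/* Returns the number of bytes copied, 0 if the ring is empty. */
static uint32_t
ring_recv (msg_ring_t * r, void *msg)
{
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_relaxed);
  uint32_t head = atomic_load_explicit (&r->head, memory_order_acquire);
  if (head == tail)
    return 0;
  memcpy (msg, r->slot[tail % RING_SLOTS], SLOT_BYTES);
  atomic_store_explicit (&r->tail, tail + 1, memory_order_release);
  return SLOT_BYTES;
}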

So the scope would cover VPP's management message queue, and it may touch the Honeycomb agent too.

III - inline accelerator handling: scope is plugin and graph management

Inline acceleration is the opposite of look-aside acceleration. For example, with IPsec inline acceleration, VPP would receive fully decrypted packets; with IPsec look-aside acceleration, VPP would receive an encrypted packet, send it to the accelerator, and receive the decrypted version.

In this case, we expect the odp_input node to have two output nodes: ether_input for non-IPsec traffic and ip_input for IPsec traffic.
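As a sketch, the wiring could look like the following, using the standard VPP node registration macro (the node function body is omitted, and note that VPP actually spells the next nodes "ethernet-input" and "ip4-input"):

/* Sketch: an odp-input node with two next nodes. Packets already
 * decrypted inline by the hardware bypass ethernet-input and go
 * straight to ip4-input; everything else takes the normal path. */
#include <vlib/vlib.h>

typedef enum
{
  ODP_INPUT_NEXT_ETHERNET_INPUT,
  ODP_INPUT_NEXT_IP4_INPUT,
  ODP_INPUT_N_NEXT,
} odp_input_next_t;

static uword
odp_input_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
              vlib_frame_t * frame)
{
  /* Poll the ODP pktio queues here and enqueue each buffer to
   * ODP_INPUT_NEXT_ETHERNET_INPUT or ODP_INPUT_NEXT_IP4_INPUT,
   * depending on the inline-IPsec metadata set by the hardware. */
  return 0;
}

VLIB_REGISTER_NODE (odp_input_node) = {
  .function = odp_input_fn,
  .name = "odp-input",
  .type = VLIB_NODE_TYPE_INPUT,
  .state = VLIB_NODE_STATE_POLLING,
  .vector_size = sizeof (u32),
  .n_next_nodes = ODP_INPUT_N_NEXT,
  .next_nodes = {
    [ODP_INPUT_NEXT_ETHERNET_INPUT] = "ethernet-input",
    [ODP_INPUT_NEXT_IP4_INPUT] = "ip4-input",
  },
};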

We also expect to be able to conduct graph introspection to see whether other nodes are present (a sketch follows the list below):
- if there is MPLS and the hardware supports IPsec over MPLS, then we should still be able to implement the acceleration
- if there are custom nodes, then we should NOT implement the acceleration
- if nodes are added or removed (we need graph change event notifiers), then we need to re-evaluate the acceleration implementation
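Here is a sketch of what such a check could look like today. vlib_get_node_by_name() is existing VPP API; the policy helper, the capability flag, and the custom node name are assumptions, and the graph change event notifiers mentioned above do not exist yet and would be new API.

/* Sketch: decide whether IPsec inline offload may be enabled by
 * inspecting which nodes are present in the VPP graph. This would
 * have to be re-run whenever the graph changes. */
#include <vlib/vlib.h>

static int
odp_ipsec_offload_allowed (vlib_main_t * vm, int hw_ipsec_over_mpls)
{
  vlib_node_t *mpls = vlib_get_node_by_name (vm, (u8 *) "mpls-input");
  /* Hypothetical plugin node whose processing the hardware cannot
   * reproduce. */
  vlib_node_t *custom = vlib_get_node_by_name (vm, (u8 *) "my-custom-node");

  if (custom)
    return 0;                  /* unknown processing: do NOT offload */
  if (mpls && !hw_ipsec_over_mpls)
    return 0;                  /* hardware cannot do IPsec over MPLS */
  return 1;
}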

(As a side note: we would very much welcome an implementation of Intel 82599 IPsec inline offloading in ODP too. As far as I understand, the 82599 is compliant with Microsoft IPsec v2, and the ODP IPsec API is not far from that. A quick analysis of the 82599 datasheet confirms this compatibility, but we haven't found an open source implementation of 82599 IPsec offload that we could leverage.)


Lastly, I am not available on Thursday the 12th, as the FD.io TSC and Linaro Networking Steering Committee meetings are happening at the same time. So I would be happy to present the new version of odp4vpp at the next session.

Cordially,

FF

On 5 January 2017 at 18:38, Ed Warnicke <hagbard@...> wrote:
Francois,

I wanted to kick off the discussion on cleaning up the small clarity issues so we can close on your project creation next week.


1)  <tobeconfirmed>@kalray.com

While it would be awesome to have someone from Kalray as an initial committer, we do need an actual person there :)

2)  I think there was some confusion with regard to your scope, which I reproduce here to ease conversation:

***************************

1) VPP in SmartNICs

In this case, the scope of the work is centered on packet I/O in the SmartNIC hardware, which exposes devices directly to consumers (a PCI VF to a VM, for instance, or a container netdev).

2) VPP in the host + accelerators or reconfigurable hardware

In this case, the scope of work encompasses:

  • Network I/O integration with VPP
  • Mediation of configuration between graph nodes and the underlying hardware


Underlying hardware may include fixed-function acceleration (crypto look-aside, IPsec inline or look-aside, compression, TCP termination…), programmable hardware (P4, SmartNIC, flow processors), or reconfigurable hardware (FPGA). Delegation of execution of parts of the VPP graph to the hardware may require the addition of VPP APIs to exchange graph topology and/or configuration with the networking layer. At this stage, architectural studies are not yet complete. Fixed-function acceleration may not need those APIs.

***************************

I think there was a lot of confusion as to what the scope entailed.  Would it still capture your intended scope of work if we 'inverted' the scope to something more like:

***************************
To produce plugin(s) for vpp to enable vpp to take advantage of hardware acceleration via ODP.

These plugin(s) may provide additional graph nodes, rewire the VPP graph to take advantage of those graph nodes, etc., via the normal vpp plugin mechanisms.
***************************

Please note, I intentionally kept the above short and simple as a *starting point* for conversation; do not hesitate to point out if I have cut out some crucial detail or otherwise changed the meaning of your scope. The goal here is to capture your intent with fidelity while improving clarity for the reader :)

Ed

On Mon, Jan 2, 2017 at 3:48 PM, Joel Halpern <joel.halpern@...> wrote:

As a minor point (not an obstacle to approval), as I understand the process, the <tobeconfirmed> committer name will either need to be replaced with a person or removed.

 

Yours,

Joel

 

From: tsc-bounces@... [mailto:tsc-bounces@...] On Behalf Of Francois Ozog
Sent: Monday, January 02, 2017 5:42 PM
To: Ed Warnicke <hagbard@...>
Cc: tsc@...; Ed Warnicke (eaw) <eaw@...>
Subject: Re: [tsc] [odp4vpp] project proposal

 

Hi

 

This Thursday would be perfect.

 

Cordially 

 

FF

 

On Mon, Jan 2, 2017 at 11:40 PM, Ed Warnicke <hagbard@...> wrote:

Francois,

 

Would you prefer Thu Jan 5, 2017 (this Thu) or Thu Jan 12, 2017 (next Thu)?

 

Ed

On Fri, Dec 16, 2016 at 10:09 AM, Francois Ozog <francois.ozog@...> wrote:

Dear members of the TSC,

It is my pleasure to present this sub-project proposal:

https://wiki.fd.io/view/Project_Proposals/odp4vpp

I would like to propose a formal review around the second week of January 2017, depending on the TSC meeting calendar.

Cordially,

François-Frédéric

--
François-Frédéric Ozog | Director Linaro Networking Group




--
Linaro
François-Frédéric Ozog | Director Linaro Networking Group
T: +33.67221.6485
francois.ozog@... | Skype: ffozog




--
Linaro
François-Frédéric Ozog | Director Linaro Networking Group
T: +33.67221.6485
francois.ozog@... | Skype: ffozog

