[vpp-dev] Five possible issues within nsh-vxlan-gpe code, please confirm them.


Keith Burns <alagalah@...>
 

Adding nsh_sfc-dev ... answers inline..

On Mon, Apr 4, 2016 at 9:13 PM, Ni, Hongjun <hongjun.ni@...> wrote:

Hi all,

 

When I used the nsh-vxlan-gpe node within VPP, I found five possible issues. Please confirm them.

 

If they are confirmed, I will submit patches to VPP for review.

 

(1). When creating a tunnel in vnet_nsh_vxlan_gpe_add_del_tunnel:

         key.src = a->src.as_u32;

 

         I think it should be:

         key.src = a->dst.as_u32; /* decap src in key is encap dst in config */


I don't think this will work. We are explicitly creating tunnels. If we are 10.1.1.1, we want to know who is trying to send us stuff, not just randomly accept any old packet that's got a tunnel encap.

For this we need to say "src: 20.1.1.1 dst: 10.1.1.1" where the dst is us (local). We may have multiple tunnels coming in. 
src: 20.1.1.1 dst: 10.1.1.1
src: 30.1.1.1 dst: 10.1.1.1

When we send, we also may want to specify which of our potentially many sources we will be using.

src: 10.1.1.1 (local) dst: 50.1.1.1
etc.
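To make the matching concrete, here is roughly what the decap-side lookup in nsh_vxlan_gpe_input boils down to (a sketch only; the identifiers below are illustrative rather than a verbatim quote of the code). The key's src comes from the received packet's outer IP source, i.e. the remote peer:

  /* sketch only: identifiers are illustrative, not copied from the tree */
  key0.src = ip0->src_address.as_u32;  /* outer IP source = the remote peer */
  key0.vni = vni0;                     /* VNI carried in the VXLAN-GPE header */
  key0.pad = 0;
  p0 = hash_get_mem (ngm->nsh_vxlan_gpe_tunnel_by_key, &key0);

So whatever naming the CLI uses, the entry stored at add-time has to carry the remote peer's address in key.src for this lookup to match.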
 

 

(2). When creating a tunnel in vnet_nsh_vxlan_gpe_add_del_tunnel,

        It lacks key.pad = 0;

        Otherwise, the VPP dataplane will fail to look up tunnels.


Lookup does occur correctly as far as I have seen, including not letting you input duplicate tunnels. Can you describe the scenario where this occurs? There may be a bug, in which case please submit a patch, but I'd like to understand more about the issue you are seeing.

 

(3). nsh_vxlan_gpe_input node lacks rx statistics.


Definitely please submit a patch. 

 

(4). nsh_vxlan_gpe_encap lacks tx statistics.


Definitely please submit a patch.
 

 

(5). Within create_nsh_vxlan_gpe_tunnel_command,

VLIB_CLI_COMMAND (create_nsh_vxlan_gpe_tunnel_command, static) = {

  .path = "nsh vxlan tunnel",

  .short_help =

  "nsh vxlan tunnel src <ip4-addr> dst <ip4-addr>"

  "    c1 <nn> c2 <nn> c3 <nn> c4 <nn> spi <nn> si <nn> vni <nn>\n"

  "    [encap-fib-id <nn>] [decap-fib-id <nn>] [o-bit <1|0>] [c-bit <1|0>]\n"

  "    [md-type <nn>][next-ip4][next-ip6][next-ethernet][next-nsh]\n"

  "    [tlv <xx>][decap-next [ip4|ip6|ethernet|nsh-encap|nsh-proxy]][del]\n",

  .function = nsh_vxlan_gpe_add_del_tunnel_command_fn,
};

 

I think

[encap-fib-id <nn>] should be [encap-vrf-id <nn>]

[decap-fib-id <nn>] should be [decap-vrf-id <nn>]


Can you please explain more? A VRF is a very explicit implementation detail. That is to say, VRF = Virtual Routing and Forwarding. The "Routing" part is RIB (or Routing Information Base) and the Forwarding part is FIB (or Forwarding Information Base). 

I know Damjan is looking into adding RIB to VNET as it's quite useful. For now though, I think FIB is still appropriate.

 

 

Best Regards,

Hongjun


_______________________________________________
vpp-dev mailing list
vpp-dev@...
https://lists.fd.io/mailman/listinfo/vpp-dev


Ni, Hongjun
 

Hi Keith,

 

Thank you for your reply. Please see comments inline, beginning with [Hongjun]

 

 

From: Keith Burns [mailto:alagalah@...]
Sent: Tuesday, April 5, 2016 9:45 PM
To: Ni, Hongjun <hongjun.ni@...>
Cc: vpp-dev@...; nsh_sfc-dev@...
Subject: Re: [vpp-dev] Five possible issues within nsh-vxlan-gpe code, please confirm them.

 

Adding nsh_sfc-dev ... answers inline..

 

On Mon, Apr 4, 2016 at 9:13 PM, Ni, Hongjun <hongjun.ni@...> wrote:

Hi all,

 

When I used the nsh-vxlan-gpe node within VPP, I found five possible issues. Please confirm them.

 

If they are confirmed, I will submit patches to VPP for review.

 

(1). When creating a tunnel in vnet_nsh_vxlan_gpe_add_del_tunnel:

         key.src = a->src.as_u32;

 

         I think it should be:

         key.src = a->dst.as_u32; /* decap src in key is encap dst in config */

 

I don't think this will work. We are explicitly creating tunnels. If we are 10.1.1.1, we want to know who is trying to send us stuff, not just randomly accept any old packet that's got a tunnel encap.

 

For this we need to say "src: 20.1.1.1 dst: 10.1.1.1" where the dst is us (local). We may have multiple tunnels coming in. 

src: 20.1.1.1 dst: 10.1.1.1

src: 30.1.1.1 dst: 10.1.1.1

 

When we send, we also may want to specify which of our potentially many sources we will be using.

 

src: 10.1.1.1 (local) dst: 50.1.1.1

etc.

 

 

[Hongjun] For tunnel creation, my understanding is that the src IP is the local IP and the dst IP is the remote IP.

Taking your example, I think the following logic is reasonable.

src: 10.1.1.1  dst: 20.1.1.1

src: 10.1.1.1  dst: 30.1.1.1

The code change is based on the above logic.

 

Actually, the vxlan tunnel creation is based on the same logic. Please refer to vnet_vxlan_add_del_tunnel():

  key.src = a->dst.as_u32; /* decap src in key is encap dst in config */

  key.vni = clib_host_to_net_u32 (a->vni << 8);

 

I think all tunnel creation should be based on the same logic. Otherwise, users will be confused.

Actually, I was confused and ran into issues when I created both a vxlan tunnel and an nsh-vxlan-gpe tunnel at the same time.
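To be concrete, the change I have in mind for vnet_nsh_vxlan_gpe_add_del_tunnel simply mirrors the VXLAN convention quoted above (a sketch; the other key fields stay as they are today):

  key.src = a->dst.as_u32; /* decap src in key is encap dst in config */
  key.vni = clib_host_to_net_u32 (a->vni << 8);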

 

(2). When creating a tunnel in vnet_nsh_vxlan_gpe_add_del_tunnel,

        It lacks key.pad = 0;

        Otherwise, the VPP dataplane will fail to look up tunnels.

 

Lookup does occur correctly as far as I have seen, including not letting you input duplicate tunnels. Can you describe the scenario where this occurs? There may be a bug, in which case please submit a patch, but I'd like to understand more about the issue you are seeing.

[Hongjun] I ran into this issue: the VPP dataplane could not find the expected tunnel, so I debugged it with gdb.

I found that within vnet_nsh_vxlan_gpe_add_del_tunnel, key.pad holds an uninitialized (random) value, not 0.

But within nsh_vxlan_gpe_input, key.pad is 0, which causes the tunnel lookup to fail.
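A minimal fix (sketch) is to zero the key, or at least key.pad, before the hash add in vnet_nsh_vxlan_gpe_add_del_tunnel, so the add-time key and the dataplane key always compare equal:

  memset (&key, 0, sizeof (key));  /* clears key.pad along with the rest */
  /* ... then fill in key.src, key.vni, etc. as before ... */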

 

(3). nsh_vxlan_gpe_input node lacks rx statistics.

 

Definitely please submit a patch. 

[Hongjun] OK.  I will submit a patch.

 

(4). nsh_vxlan_gpe_encap lacks tx statistics.

 

Definitely please submit a patch.

 [Hongjun] OK.  I will submit a patch.
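For reference, the usual per-interface counter pattern in other VPP tunnel input nodes looks roughly like this (a sketch only; the variable names here are illustrative):

  vlib_increment_combined_counter
    (im->combined_sw_if_counters + VNET_INTERFACE_COUNTER_RX,
     cpu_index, t0->sw_if_index,
     1 /* packets */,
     vlib_buffer_length_in_chain (vm, b0) /* bytes */);

The tx side in nsh_vxlan_gpe_encap would do the same with VNET_INTERFACE_COUNTER_TX.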

 

(5). Within create_nsh_vxlan_gpe_tunnel_command,

VLIB_CLI_COMMAND (create_nsh_vxlan_gpe_tunnel_command, static) = {

  .path = "nsh vxlan tunnel",

  .short_help =

  "nsh vxlan tunnel src <ip4-addr> dst <ip4-addr>"

  "    c1 <nn> c2 <nn> c3 <nn> c4 <nn> spi <nn> si <nn> vni <nn>\n"

  "    [encap-fib-id <nn>] [decap-fib-id <nn>] [o-bit <1|0>] [c-bit <1|0>]\n"

  "    [md-type <nn>][next-ip4][next-ip6][next-ethernet][next-nsh]\n"

  "    [tlv <xx>][decap-next [ip4|ip6|ethernet|nsh-encap|nsh-proxy]][del]\n",

  .function = nsh_vxlan_gpe_add_del_tunnel_command_fn,
};

 

I think

[encap-fib-id <nn>] should be [encap-vrf-id <nn>]

[decap-fib-id <nn>] should be [decap-vrf-id <nn>]

 

Can you please explain more? A VRF is a very explicit implementation detail. That is to say, VRF = Virtual Routing and Forwarding. The "Routing" part is RIB (or Routing Information Base) and the Forwarding part is FIB (or Forwarding Information Base). 

 

I know Damjan is looking into adding RIB to VNET as it's quite useful. For now though, I think FIB is still appropriate.

 

[Hongjun] The short_help indicates that users should use "[encap-fib-id <nn>] [decap-fib-id <nn>]".

But nsh_vxlan_gpe_add_del_tunnel_command_fn() parses the input parameters encap-vrf-id and decap-vrf-id, then converts them to encap_fib_index and decap_fib_index.

When I created a tunnel with encap-fib-id and decap-fib-id, the VPP datapath failed to do the IPv4 lookup.

When using encap-vrf-id and decap-vrf-id, the subsequent IPv4 lookup worked well (see the corrected short_help sketch after the snippet below).

 

    else if (unformat (line_input, "encap-vrf-id %d", &tmp))

      {

        encap_fib_index = fib_index_from_fib_id (tmp);

        if (encap_fib_index == ~0)

          return clib_error_return (0, "nonexistent encap fib id %d", tmp);

      }

    else if (unformat (line_input, "decap-vrf-id %d", &tmp))

      {

        decap_fib_index = fib_index_from_fib_id (tmp);

        if (decap_fib_index == ~0)

          return clib_error_return (0, "nonexistent decap fib id %d", tmp);

      }
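So either the parser or the help text needs to change; the smaller change is to make the short_help match what the function actually accepts, e.g. (sketch, only the two keywords differ from today's string):

  .short_help =
  "nsh vxlan tunnel src <ip4-addr> dst <ip4-addr>"
  "    c1 <nn> c2 <nn> c3 <nn> c4 <nn> spi <nn> si <nn> vni <nn>\n"
  "    [encap-vrf-id <nn>] [decap-vrf-id <nn>] [o-bit <1|0>] [c-bit <1|0>]\n"
  "    [md-type <nn>][next-ip4][next-ip6][next-ethernet][next-nsh]\n"
  "    [tlv <xx>][decap-next [ip4|ip6|ethernet|nsh-encap|nsh-proxy]][del]\n",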

 

 

 

Best Regards,

Hongjun


_______________________________________________
vpp-dev mailing list
vpp-dev@...
https://lists.fd.io/mailman/listinfo/vpp-dev