Internet Engineering Task Force (IETF)                  D. Joachimpillai
Request for Comments: 8013                                       Verizon
Category: Standards Track                                  J. Hadi Salim
ISSN: 2070-1721                                        Mojatatu Networks
                                                           February 2017
        
Forwarding and Control Element Separation (ForCES) Inter-FE Logical Functional Block (LFB)

Abstract

This document describes how to extend the Forwarding and Control Element Separation (ForCES) Logical Functional Block (LFB) topology across Forwarding Elements (FEs) by defining the inter-FE LFB class. The inter-FE LFB class provides the ability to pass data and metadata across FEs without needing any changes to the ForCES specification. The document focuses on Ethernet transport.

Status of This Memo

This is an Internet Standards Track document.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc8013.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Terminology and Conventions . . . . . . . . . . . . . . . . .   3
     2.1.  Requirements Language . . . . . . . . . . . . . . . . . .   3
     2.2.  Definitions . . . . . . . . . . . . . . . . . . . . . . .   3
   3.  Problem Scope and Use Cases . . . . . . . . . . . . . . . . .   4
     3.1.  Assumptions . . . . . . . . . . . . . . . . . . . . . . .   4
     3.2.  Sample Use Cases  . . . . . . . . . . . . . . . . . . . .   4
       3.2.1.  Basic IPv4 Router . . . . . . . . . . . . . . . . . .   4
         3.2.1.1.  Distributing the Basic IPv4 Router  . . . . . . .   6
       3.2.2.  Arbitrary Network Function  . . . . . . . . . . . . .   7
         3.2.2.1.  Distributing the Arbitrary Network Function . . .   8
   4.  Inter-FE LFB Overview . . . . . . . . . . . . . . . . . . . .   8
     4.1.  Inserting the Inter-FE LFB  . . . . . . . . . . . . . . .   8
   5.  Inter-FE Ethernet Connectivity  . . . . . . . . . . . . . . .  10
     5.1.  Inter-FE Ethernet Connectivity Issues . . . . . . . . . .  10
       5.1.1.  MTU Consideration . . . . . . . . . . . . . . . . . .  10
       5.1.2.  Quality-of-Service Considerations . . . . . . . . . .  11
       5.1.3.  Congestion Considerations . . . . . . . . . . . . . .  11
     5.2.  Inter-FE Ethernet Encapsulation . . . . . . . . . . . . .  12
   6.  Detailed Description of the Ethernet Inter-FE LFB . . . . . .  13
     6.1.  Data Handling . . . . . . . . . . . . . . . . . . . . . .  13
       6.1.1.  Egress Processing . . . . . . . . . . . . . . . . . .  14
       6.1.2.  Ingress Processing  . . . . . . . . . . . . . . . . .  15
     6.2.  Components  . . . . . . . . . . . . . . . . . . . . . . .  16
     6.3.  Inter-FE LFB XML Model  . . . . . . . . . . . . . . . . .  17
   7.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  21
   8.  IEEE Assignment Considerations  . . . . . . . . . . . . . . .  21
   9.  Security Considerations . . . . . . . . . . . . . . . . . . .  22
   10. References  . . . . . . . . . . . . . . . . . . . . . . . . .  23
     10.1.  Normative References . . . . . . . . . . . . . . . . . .  23
     10.2.  Informative References . . . . . . . . . . . . . . . . .  24
   Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . .  25
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  25
        
1. Introduction

In the ForCES architecture, a packet service can be modeled by composing a graph of one or more LFB instances. The reader is referred to the details in the ForCES model [RFC5812].

The ForCES model describes the processing within a single Forwarding Element (FE) in terms of Logical Functional Blocks (LFBs), including provision for the Control Element (CE) to establish and modify that processing sequence, and the parameters of the individual LFBs.

Under some circumstances, it would be beneficial to be able to extend this view and the resulting processing across more than one FE. This may be in order to achieve scale by splitting the processing across elements or to utilize specialized hardware available on specific FEs.

Given that the ForCES inter-LFB architecture calls for the ability to pass metadata between LFBs, it is imperative to define mechanisms to extend that existing feature and allow passing the metadata between LFBs across FEs.

This document describes how to extend the LFB topology across FEs, i.e., inter-FE connectivity without needing any changes to the ForCES definitions. It focuses on using Ethernet as the interconnection between FEs.

2. Terminology and Conventions
2.1. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

2.2. Definitions

This document depends on the terms (below) defined in several ForCES documents: [RFC3746], [RFC5810], [RFC5811], [RFC5812], [RFC7391], and [RFC7408].

Control Element (CE)

Forwarding Element (FE)

FE Model

LFB (Logical Functional Block) Class (or type)

LFB Instance

LFB Model

LFB Metadata

ForCES Component

LFB Component

ForCES Protocol Layer (ForCES PL)

ForCES Protocol Transport Mapping Layer (ForCES TML)

3. Problem Scope and Use Cases

The scope of this document is to solve the challenge of passing ForCES-defined metadata alongside packet data across FEs (be they physical or virtual) for the purpose of distributing the LFB processing.

3.1. Assumptions

o The FEs involved in the inter-FE LFB belong to the same Network Element (NE) and are within a single administrative private network that is in close proximity.

o The FEs are already interconnected using Ethernet. We focus on Ethernet because it is commonly used for FE interconnection. Other higher transports (such as UDP over IP) or lower transports could be defined to carry the data and metadata, but these cases are not addressed in this document.

3.2. Sample Use Cases

To illustrate the problem scope, we present two use cases where we start with a single FE running all of the LFBs' functionality and then split it across multiple FEs while achieving the same end goals.

3.2.1. Basic IPv4 Router

A sample LFB topology depicted in Figure 1 demonstrates a service graph for delivering a basic IPv4-forwarding service within one FE. For the purpose of illustration, the diagram shows LFB classes as graph nodes instead of multiple LFB class instances.

Since the purpose of the illustration in Figure 1 is to showcase how data and metadata are sent down or upstream on a graph of LFB instances, it abstracts out any ports in both directions and talks about a generic ingress and egress LFB. Again, for illustration purposes, the diagram does not show exception or error paths. Also left out are details on Reverse Path Filtering, ECMP, multicast handling, etc. In other words, this is not meant to be a complete description of an IPv4-forwarding application; for a more complete example, please refer to the LFBLibrary document [RFC6956].

The output of the ingress LFB(s) coming into the IPv4 Validator LFB will have both the IPv4 packets and, depending on the implementation, a variety of ingress metadata such as offsets into the different headers, any classification metadata, physical and virtual ports encountered, tunneling information, etc. These metadata are lumped together as "ingress metadata".

Once the IPv4 validator vets the packet (for example, it ensures that there is no expired TTL), it feeds the packet and inherited metadata into the IPv4 unicast LPM (Longest-Prefix-Matching) LFB.

                      +----+
                      |    |
           IPv4 pkt   |    | IPv4 pkt     +-----+             +---+
       +------------->|    +------------->|     |             |   |
       |  + ingress   |    | + ingress    |IPv4 |   IPv4 pkt  |   |
       |   metadata   |    | metadata     |Ucast+------------>|   +--+
       |              +----+              |LPM  |  + ingress  |   |  |
     +-+-+             IPv4               +-----+  + NHinfo   +---+  |
     |   |             Validator                   metadata   IPv4   |
     |   |             LFB                                    NextHop|
     |   |                                                     LFB   |
     |   |                                                           |
     |   |                                                  IPv4 pkt |
     |   |                                               + {ingress  |
     +---+                                                  + NHdetails}
     Ingress                                                metadata |
      LFB                                +--------+                  |
                                         | Egress |                  |
                                      <--+        |<-----------------+
                                         |  LFB   |
                                         +--------+
        

Figure 1: Basic IPv4 Packet Service LFB Topology

The IPv4 unicast LPM LFB does an LPM lookup on the IPv4 FIB using the destination IP address as a search key. The result is typically a next-hop selector, which is passed downstream as metadata.

The NextHop LFB receives the IPv4 packet with associated next-hop (NH) information metadata. The NextHop LFB consumes the NH information metadata and derives a table index from it to look up the next-hop table in order to find the appropriate egress information. The lookup result is used to build the next-hop details to be used downstream on the egress. This information may include any source and destination information (for our purposes, which Media Access Control (MAC) addresses to use) as well as egress ports. (Note: It is also at this LFB where typically, the forwarding TTL-decrementing and IP checksum recalculation occurs.)
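
The metadata hand-off described above can be pictured with a short, non-normative sketch. The FIB contents, next-hop table, field names, and helper functions below are purely hypothetical; the sketch only shows how the NHinfo metadatum produced by the LPM stage is consumed by the NextHop stage to produce NHdetails metadata.

   # Hypothetical sketch: the IPv4 Ucast LPM stage produces NHinfo
   # metadata, which the NextHop stage consumes to build NHdetails.
   import ipaddress

   # Hypothetical FIB: prefix -> next-hop selector (index into NH table)
   FIB = {
       ipaddress.ip_network("192.0.2.0/24"): 7,
       ipaddress.ip_network("0.0.0.0/0"): 1,
   }

   # Hypothetical next-hop table: selector -> egress details
   NEXTHOP_TABLE = {
       7: {"dst_mac": "00:11:22:33:44:55", "egress_port": 2},
       1: {"dst_mac": "00:11:22:33:44:66", "egress_port": 1},
   }

   def lpm_stage(dst_ip, metadata):
       """Longest-prefix match; attaches an NHinfo selector."""
       dst = ipaddress.ip_address(dst_ip)
       best = max((n for n in FIB if dst in n),
                  key=lambda n: n.prefixlen)
       metadata["NHinfo"] = FIB[best]
       return metadata

   def nexthop_stage(metadata):
       """Consumes NHinfo and derives NHdetails for the egress side."""
       metadata["NHdetails"] = NEXTHOP_TABLE[metadata.pop("NHinfo")]
       return metadata

   meta = {"ingress": {"inport": 5}}      # "ingress metadata"
   meta = lpm_stage("192.0.2.10", meta)   # adds NHinfo
   meta = nexthop_stage(meta)             # replaces NHinfo with NHdetails
   print(meta)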

The details of the egress LFB are considered out of scope for this discussion. Suffice it to say that somewhere within or beyond the Egress LFB, the IPv4 packet will be sent out a port (e.g., Ethernet, virtual or physical).

3.2.1.1. Distributing the Basic IPv4 Router

Figure 2 demonstrates one way that the router LFB topology in Figure 1 may be split across two FEs (e.g., two Application-Specific Integrated Circuits (ASICs)). Figure 2 shows the LFB topology split across FEs after the IPv4 unicast LPM LFB.

      FE1
    +-------------------------------------------------------------+
    |                            +----+                           |
    | +----------+               |    |                           |
    | | Ingress  |    IPv4 pkt   |    | IPv4 pkt     +-----+      |
    | |  LFB     +-------------->|    +------------->|     |      |
    | |          |  + ingress    |    | + ingress    |IPv4 |      |
    | +----------+    metadata   |    |   metadata   |Ucast|      |
    |      ^                     +----+              |LPM  |      |
    |      |                      IPv4               +--+--+      |
    |      |                     Validator              |         |
    |                             LFB                   |         |
    +---------------------------------------------------|---------+
                                                        |
                                                   IPv4 packet +
                                                 {ingress + NHinfo}
                                                     metadata
      FE2                                               |
    +---------------------------------------------------|---------+
    |                                                   V         |
    |             +--------+                       +--------+     |
    |             | Egress |     IPv4 packet       | IPv4   |     |
    |       <-----+  LFB   |<----------------------+NextHop |     |
    |             |        |{ingress + NHdetails}  | LFB    |     |
    |             +--------+      metadata         +--------+     |
    +-------------------------------------------------------------+
        

Figure 2: Split IPv4 Packet Service LFB Topology

Some proprietary interconnections (for example, Broadcom HiGig over XAUI [brcm-higig]) are known to exist to carry both the IPv4 packet and the related metadata between the IPv4 Unicast LFB and IPv4NextHop LFB across the two FEs.

This document defines the inter-FE LFB, a standard mechanism for encapsulating, generating, receiving, and decapsulating packets and associated metadata between FEs over Ethernet.

3.2.2. Arbitrary Network Function

In this section, we show an example of an arbitrary Network Function that is more coarsely grained in terms of functionality. Each Network Function may constitute more than one LFB.

      FE1
    +-------------------------------------------------------------+
    |                            +----+                           |
    | +----------+               |    |                           |
    | | Network  |   pkt         |NF2 |    pkt       +-----+      |
    | | Function +-------------->|    +------------->|     |      |
    | |    1     |  + NF1        |    | + NF1/2      |NF3  |      |
    | +----------+    metadata   |    |   metadata   |     |      |
    |      ^                     +----+              |     |      |
    |      |                                         +--+--+      |
    |      |                                            |         |
    |                                                   |         |
    +---------------------------------------------------|---------+
                                                        V
        

Figure 3: A Network Function Service Chain within One FE

The setup in Figure 3 is typical of most packet processing boxes where we have functions like deep packet inspection (DPI), NAT, Routing, etc., connected in such a topology to deliver a packet processing service to flows.

3.2.2.1. Distributing the Arbitrary Network Function

The setup in Figure 3 can instead be split across three FEs, as demonstrated in Figure 4. This could be motivated by scale-out reasons or because different vendors provide different pieces of the functionality that are plugged in to compose the overall service. The end result is that the same packet service is delivered to the different flows passing through.

      FE1                        FE2
      +----------+               +----+               FE3
      | Network  |   pkt         |NF2 |    pkt       +-----+
      | Function +-------------->|    +------------->|     |
      |    1     |  + NF1        |    | + NF1/2      |NF3  |
      +----------+    metadata   |    |   metadata   |     |
           ^                     +----+              |     |
           |                                         +--+--+
                                                        |
                                                        V
        

Figure 4: A Network Function Service Chain Distributed across Multiple FEs

4. Inter-FE LFB Overview

We address the inter-FE connectivity requirements by defining the inter-FE LFB class. Using a standard LFB class definition implies no change to the basic ForCES architecture in the form of the core LFBs (FE Protocol or Object LFBs). This design choice was made after considering an alternative approach that would have required changes to both the FE Object capabilities (SupportedLFBs) and the LFBTopology component to describe the inter-FE connectivity capabilities as well as the runtime topology of the LFB instances.

4.1. Inserting the Inter-FE LFB

The distributed LFB topology described in Figure 2 is re-illustrated in Figure 5 to show the topology location where the inter-FE LFB would fit in.

As can be observed in Figure 5, the same details passed between the IPv4 unicast LPM LFB and the IPv4 NH LFB are passed to the egress side of the inter-FE LFB. This information is illustrated as a multiplicity of inputs into the egress inter-FE LFB instance. Each input represents a unique set of selection information.

      FE1
    +-------------------------------------------------------------+
    | +----------+               +----+                           |
    | | Ingress  |    IPv4 pkt   |    | IPv4 pkt     +-----+      |
    | |  LFB     +-------------->|    +------------->|     |      |
    | |          |  + ingress    |    | + ingress    |IPv4 |      |
    | +----------+    metadata   |    |   metadata   |Ucast|      |
    |      ^                     +----+              |LPM  |      |
    |      |                      IPv4               +--+--+      |
    |      |                     Validator              |         |
    |      |                      LFB                   |         |
    |      |                                  IPv4 pkt + metadata |
    |      |                                   {ingress + NHinfo} |
    |      |                                            |         |
    |      |                                       +..--+..+      |
    |      |                                       |..| |  |      |
    |                                            +-V--V-V--V-+    |
    |                                            |   Egress  |    |
    |                                            |  Inter-FE |    |
    |                                            |   LFB     |    |
    |                                            +------+----+    |
    +---------------------------------------------------|---------+
                                                        |
                                Ethernet Frame with:    |
                                IPv4 packet data and metadata
                                {ingress + NHinfo + Inter-FE info}
     FE2                                                |
    +---------------------------------------------------|---------+
    |                                                +..+.+..+    |
    |                                                |..|.|..|    |
    |                                              +-V--V-V--V-+  |
    |                                              | Ingress   |  |
    |                                              | Inter-FE  |  |
    |                                              |   LFB     |  |
    |                                              +----+------+  |
    |                                                   |         |
    |                                         IPv4 pkt + metadata |
    |                                          {ingress + NHinfo} |
    |                                                   |         |
    |             +--------+                       +----V---+     |
    |             | Egress |     IPv4 packet       | IPv4   |     |
    |       <-----+  LFB   |<----------------------+NextHop |     |
    |             |        |{ingress + NHdetails}  | LFB    |     |
    |             +--------+      metadata         +--------+     |
    +-------------------------------------------------------------+
        

Figure 5: Split IPv4-Forwarding Service with Inter-FE LFB

The egress of the inter-FE LFB uses the received packet and metadata to select details for encapsulation when sending messages towards the selected neighboring FE. These details include what to communicate as the source and destination FEs (abstracted as MAC addresses as described in Section 5.2); in addition, the original metadata may be passed along with the original IPv4 packet.

On the ingress side of the inter-FE LFB, the received packet and its associated metadata are used to decide how the packet graph continues. This includes deciding which of the original metadata to restore and on which next LFB class instance processing is to continue. In Figure 5, an IPv4NextHop LFB instance is selected and the appropriate metadata is passed to it.

The ingress side of the inter-FE LFB consumes some of the passed information and hands the IPv4 packet, along with the ingress and NHinfo metadata, to the IPv4NextHop LFB, as was done earlier in both Figures 1 and 2.

5. Inter-FE Ethernet Connectivity

Section 5.1 describes some of the issues related to using Ethernet as the transport and how we mitigate them.

Section 5.2 defines a payload format that is to be used over Ethernet. An existing implementation of this specification that runs on top of Linux Traffic Control [linux-tc] is described in [tc-ife].

5.1. Inter-FE Ethernet Connectivity Issues

Several issues that need consideration may arise from using direct Ethernet encapsulation.

5.1.1. MTU Consideration

Because we are adding data to existing Ethernet frames, MTU issues may arise. We recommend:

o Using large MTUs when possible (for example, with jumbo frames).

o Limiting the amount of metadata that could be transmitted; our definition allows for filtering of select metadata to be encapsulated in the frame as described in Section 6. We recommend sizing the egress port MTU so as to allow space for the maximum total size of metadata to be allowed between FEs. In such a setup, the port is configured to "lie" to the upper layers by claiming to have a lower MTU than it is capable of. Setting the MTU can be achieved by ForCES control of the port LFB (or some other configuration mechanism). In essence, the control plane, when explicitly making a decision for the MTU settings of the egress port, is implicitly deciding how much metadata will be allowed. Caution needs to be exercised on how low the resulting reported link MTU could be: for IPv4 packets, the minimum MTU is 68 octets [RFC791], and for IPv6 the minimum MTU is 1280 octets [RFC2460].
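
To make the trade-off concrete, the following non-normative sketch computes the MTU that the control plane would report to upper layers for an assumed link MTU and an assumed per-packet metadata budget; all numbers are illustrative.

   # Hypothetical numbers: the reported (advertised) MTU leaves room for
   # the 16-bit metadata length field (Section 5.2) plus a chosen ceiling
   # of TLV-encoded metadata within the real link MTU of the inter-FE port.
   LINK_MTU = 1500            # assumed Ethernet payload size of the link
   METADATA_LENGTH_FIELD = 2  # the 16-bit "Metadata length" field
   METADATA_BUDGET = 64       # assumed ceiling for encapsulated TLVs

   reported_mtu = LINK_MTU - METADATA_LENGTH_FIELD - METADATA_BUDGET
   assert reported_mtu >= 68  # IPv4 minimum MTU caution noted above
   print(reported_mtu)        # 1434 with these assumed values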

5.1.2. Quality-of-Service Considerations

A raw packet arriving at the inter-FE LFB (from upstream LFB class instances) may have Class-of-Service (CoS) metadata indicating how it should be treated from a Quality-of-Service perspective.

The resulting Ethernet frame will eventually be (preferentially) treated by a downstream LFB (typically a port LFB instance), and its CoS marks will be honored in terms of priority. In other words, the presence of the inter-FE LFB does not change the CoS semantics.

5.1.3. Congestion Considerations

Most of the traffic passing through FEs that utilize the inter-FE LFB is expected to be IP based, which is generally assumed to be congestion controlled [UDP-GUIDE]. For example, if congestion causes a TCP packet annotated with additional ForCES metadata to be dropped between FEs, the sending TCP can be expected to react in the same fashion as if that packet had been dropped at a different point on its path where ForCES is not involved. For this reason, additional inter-FE congestion-control mechanisms are not specified.

However, the increased packet size due to the addition of ForCES metadata is likely to require additional bandwidth on inter-FE links in comparison to what would be required to carry the same traffic without ForCES metadata. Therefore, traffic engineering SHOULD be done when deploying inter-FE encapsulation.

Furthermore, the inter-FE LFB MUST only be deployed within a single network (with a single network operator) or networks of an adjacent set of cooperating network operators where traffic is managed to avoid congestion. These are Controlled Environments, as defined by Section 3.6 of [UDP-GUIDE]. Additional measures SHOULD be imposed to restrict the impact of inter-FE-encapsulated traffic on other traffic; for example:

o rate-limiting all inter-FE LFB traffic at an upstream LFB

o managing circuit breaking [circuit-b]

o Isolating the inter-FE traffic either via dedicated interfaces or VLANs

5.2. Inter-FE Ethernet Encapsulation

The Ethernet wire encapsulation is illustrated in Figure 6. The process that leads to this encapsulation is described in Section 6. The resulting frame is 32-bit aligned.

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Destination MAC Address                                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Destination MAC Address       |   Source MAC Address          |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Source MAC Address                                            |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Inter-FE ethertype            | Metadata length               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | TLV encoded Metadata ~~~..............~~                      |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | TLV encoded Metadata ~~~..............~~                      |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Original packet data ~~................~~                     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
        

Figure 6: Packet Format Definition

The Ethernet header (illustrated in Figure 6) has the following semantics:

o The Destination MAC Address is used to identify the Destination FEID by the CE policy (as described in Section 6).

o The Source MAC Address is used to identify the Source FEID by the CE policy (as described in Section 6).

o The ethertype is used to identify the frame as inter-FE LFB type. Ethertype ED3E (base 16) is to be used.

o The 16-bit metadata length is used to describe the total encoded metadata length (including the 16 bits used to encode the metadata length).

o One or more 16-bit TLV-encoded metadatum follows the Metadata length field. The TLV type identifies the metadata ID. ForCES metadata IDs that have been registered with IANA will be used.

All TLVs will be 32-bit-aligned. We recognize that using a 16-bit TLV restricts the metadata ID to 16 bits instead of a ForCES-defined component ID space of 32 bits if an Index-Length-Value (ILV) is used. However, at the time of publication, we believe this is sufficient to carry all the information we need; the TLV approach has been selected because it saves us 4 bytes per metadatum transferred as compared to the ILV approach.

o The original packet data payload is appended at the end of the metadata as shown.
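
As a non-normative illustration of the layout in Figure 6, the following sketch assembles an inter-FE frame. The MAC addresses, metadata ID, and value are placeholders, and the per-TLV length convention (length of the value only) plus the 32-bit zero padding are assumptions of this sketch rather than normative definitions.

   # Hypothetical sketch of the Figure 6 wire layout using struct.
   import struct

   IFE_ETHERTYPE = 0xED3E

   def encode_tlv(meta_id, value):
       """16-bit type (metaID), 16-bit length, value, zero-padded to a
       32-bit boundary (length convention assumed for this sketch)."""
       tlv = struct.pack("!HH", meta_id, len(value)) + value
       return tlv + b"\x00" * ((-len(tlv)) % 4)

   def build_ife_frame(dst_mac, src_mac, metadata, payload):
       tlvs = b"".join(encode_tlv(mid, val) for mid, val in metadata)
       # Metadata length covers its own 2 bytes plus all encoded TLVs.
       header = dst_mac + src_mac + struct.pack("!HH", IFE_ETHERTYPE,
                                                2 + len(tlvs))
       return header + tlvs + payload

   frame = build_ife_frame(
       dst_mac=bytes(6),                            # placeholder DSTFE
       src_mac=bytes(6),                            # placeholder SRCFE
       metadata=[(0x0001, struct.pack("!I", 42))],  # placeholder metaID
       payload=b"original packet data")
   print(len(frame), frame.hex())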

6. Detailed Description of the Ethernet Inter-FE LFB

The Ethernet inter-FE LFB has two LFB input port groups and three LFB output ports as shown in Figure 7.

The inter-FE LFB defines two components used in aiding processing described in Section 6.1.

                    +-----------------+
     Inter-FE LFB   |                 |
     Encapsulated   |             OUT2+--> Decapsulated Packet
     -------------->|IngressInGroup   |       + metadata
     Ethernet Frame |                 |
                    |                 |
     raw Packet +   |             OUT1+--> Encapsulated Ethernet
     -------------->|EgressInGroup    |           Frame
     Metadata       |                 |
                    |    EXCEPTIONOUT +--> ExceptionID, packet
                    |                 |           + metadata
                    +-----------------+
        

Figure 7: Inter-FE LFB

6.1. Data Handling

The inter-FE LFB (instance) can be positioned at the egress of a source FE. Figure 5 illustrates an example source FE in the form of FE1. In such a case, an inter-FE LFB instance receives, via port group EgressInGroup, a raw packet and associated metadata from the preceding LFB instances. The input information is used to produce a selection of how to generate and encapsulate the new frame. The set of all selections is stored in the LFB component IFETable described further below. The processed encapsulated Ethernet frame will go out on OUT1 to a downstream LFB instance when processing succeeds or to the EXCEPTIONOUT port in the case of failure.

The inter-FE LFB (instance) can be positioned at the ingress of a receiving FE. Figure 5 illustrates an example destination FE in the form of FE2. In such a case, an inter-FE LFB receives, via an LFB port in the IngressInGroup, an encapsulated Ethernet frame. Successful processing of the packet will result in a raw packet with associated metadata IDs going downstream to an LFB connected on OUT2. On failure, the data is sent out the EXCEPTIONOUT port.

6.1.1. Egress Processing

The egress inter-FE LFB receives packet data and any accompanying metadatum at an LFB port of the LFB instance's input port group labeled EgressInGroup.

The LFB implementation may use the incoming LFB port (within the LFB port group EgressInGroup) to map to a table index used to look up the IFETable table.

If the lookup is successful, a matched table row that has the IFEInfo details is retrieved with the tuple (optional IFETYPE, optional StatId, Destination MAC address (DSTFE), Source MAC address (SRCFE), and optional metafilters). The metafilter lists define a whitelist of which metadata are to be passed to the neighboring FE. The inter-FE LFB will perform the following actions using the resulting tuple:

o Increment statistics for packet and byte count observed at the corresponding IFEStats entry.

o When the MetaFilterList is present, walk each received metadatum and apply it against the MetaFilterList. If no legitimate metadata is found that needs to be passed downstream, then the processing stops and the packet and metadata are sent out the EXCEPTIONOUT port with the exceptionID of EncapTableLookupFailed [RFC6956].

o Check that the additional overhead of the Ethernet header and encapsulated metadata will not exceed MTU. If it does, increment the error-packet-count statistics and send the packet and metadata out the EXCEPTIONOUT port with the exceptionID of FragRequired [RFC6956].

o Create the Ethernet header.

o Set the Destination MAC address of the Ethernet header with the value found in the DSTFE field.

o Set the Source MAC address of the Ethernet header with the value found in the SRCFE field.

o If the optional IFETYPE is present, set the ethertype to the value found in IFETYPE. If IFETYPE is absent, then the standard inter-FE LFB ethertype ED3E (base 16) is used.

o Encapsulate each allowed metadatum in a TLV. Use the metaID as the "type" field in the TLV header. The TLV should be aligned to 32 bits. This means you may need to add a padding of zeroes at the end of the TLV to ensure alignment.

o Update the metadata length to the sum of each TLV's space plus 2 bytes (a 16-bit space for the Metadata length field).

The resulting packet is sent to the next LFB instance connected to the OUT1 LFB-port, typically a port LFB.

In the case of a failed lookup, the original packet and associated metadata is sent out the EXCEPTIONOUT port with the exceptionID of EncapTableLookupFailed [RFC6956]. Note that the EXCEPTIONOUT LFB port is merely an abstraction and implementation may in fact drop packets as described above.
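
A non-normative sketch of this egress flow is given below. The table and statistics layouts loosely mirror the IFETable and IFEStats components described in Section 6.2, build_ife_frame refers to the encapsulation sketch in Section 5.2, and the port MTU value is an assumption of the illustration.

   # Hypothetical egress-side flow: IFETable lookup keyed by the incoming
   # LFB port, metadata whitelist filtering, MTU check, encapsulation.
   def egress_process(ife_table, ife_stats, port_index,
                      packet, metadata, port_mtu=1500):
       row = ife_table.get(port_index)
       if row is None:
           return ("EXCEPTIONOUT", "EncapTableLookupFailed",
                   packet, metadata)

       stats = ife_stats[row.get("StatId", port_index)]
       stats["packets"] += 1
       stats["bytes"] += len(packet)

       allowed = row.get("MetaFilterList")
       kept = {mid: val for mid, val in metadata.items()
               if allowed is None or mid in allowed}
       if allowed is not None and not kept:
           return ("EXCEPTIONOUT", "EncapTableLookupFailed",
                   packet, metadata)

       frame = build_ife_frame(row["DSTFE"], row["SRCFE"],
                               sorted(kept.items()), packet)
       if len(frame) - 14 > port_mtu:   # 14-byte outer Ethernet header
           stats["errors"] += 1
           return ("EXCEPTIONOUT", "FragRequired", packet, metadata)

       return ("OUT1", None, frame, None)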

6.1.2. Ingress Processing

An ingressing inter-FE LFB packet is recognized by inspecting the ethertype, and optionally the destination and source MAC addresses. A matching packet is mapped to an LFB instance port in the IngressInGroup. The IFETable table row entry matching the LFB instance port may have optionally programmed metadata filters. In such a case, the ingress processing should use the metadata filters as a whitelist of what metadatum is to be allowed.

o Increment statistics for packet and byte count observed.

o Look at the metadata length field and walk the packet data, extracting the metadata values from the TLVs. For each metadatum extracted, in the presence of metadata filters, the metaID is compared against the relevant IFETable row metafilter list. If the metadatum is recognized and allowed by the filter, the corresponding implementation Metadatum field is set. If an unknown metadatum ID is encountered or if the metaID is not in the allowed filter list, then the implementation is expected to ignore it, increment the packet error statistic, and proceed processing other metadatum.

o Upon completion of processing all the metadata, the inter-FE LFB instance resets the data point to the original payload (i.e., skips the IFE header information). At this point, the original packet that was passed to the egress inter-FE LFB at the source FE is reconstructed. This data is then passed along with the reconstructed metadata downstream to the next LFB instance in the graph.

In the case of a processing failure of either ingress or egress positioning of the LFB, the packet and metadata are sent out the EXCEPTIONOUT LFB port with the appropriate error ID. Note that the EXCEPTIONOUT LFB port is merely an abstraction and implementation may in fact drop packets as described above.
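
The decapsulation steps above can be sketched as follows (non-normative). The TLV length and padding conventions match the encapsulation sketch in Section 5.2 and are assumptions of the illustration, not normative definitions.

   # Hypothetical ingress-side flow: read the metadata length, walk the
   # 32-bit-aligned TLVs, keep whitelisted metadata, and return the
   # original payload together with the restored metadata.
   import struct

   def ingress_process(ife_payload, allowed_meta_ids=None):
       (meta_len,) = struct.unpack_from("!H", ife_payload, 0)
       metadata, errors, offset = {}, 0, 2
       while offset < meta_len:
           mid, vlen = struct.unpack_from("!HH", ife_payload, offset)
           value = ife_payload[offset + 4:offset + 4 + vlen]
           if allowed_meta_ids is None or mid in allowed_meta_ids:
               metadata[mid] = value
           else:
               errors += 1            # unknown or filtered metadatum
           tlv_size = 4 + vlen
           offset += tlv_size + ((-tlv_size) % 4)  # skip 32-bit padding
       return ife_payload[meta_len:], metadata, errors

   # e.g., using "frame" from the Section 5.2 sketch (12 MAC bytes plus
   # the 2-byte ethertype are stripped first):
   #   pkt, meta, errs = ingress_process(frame[14:])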

6.2. Components

There are two LFB components accessed by the CE. The reader is asked to refer to the definitions in Figure 8.

The first component, populated by the CE, is an array known as the "IFETable" table. The array rows are made up of IFEInfo structure. The IFEInfo structure constitutes the optional IFETYPE, the optionally present StatId, the Destination MAC address (DSTFE), the Source MAC address (SRCFE), and an optionally present array of allowed metaIDs (MetaFilterList).

The second component (ID 2), populated by the FE and read by the CE, is an indexed array known as the "IFEStats" table. Each IFEStats row carries statistics information in the structure bstats.

A note about the StatId relationship between the IFETable table and the IFEStats table -- an implementation may choose to map between an IFETable row and IFEStats table row using the StatId entry in the matching IFETable row. In that case, the IFETable StatId must be present. An alternative implementation may map an IFETable row to an IFEStats table row at provisioning time. Yet another alternative implementation may choose not to use the IFETable row StatId and instead use the IFETable row index as the IFEStats index. For these reasons, the StatId component is optional.
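
For readability, one IFETable row (the IFEInfo structure) can be pictured with the following non-normative sketch; the field names follow the XML model in Section 6.3, while the container representation itself is an assumption of the illustration.

   # Hypothetical in-memory view of one IFETable row (IFEInfo).
   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class IFEInfo:
       DSTFE: bytes                        # destination FE MAC (6 bytes)
       SRCFE: bytes                        # source FE MAC (6 bytes)
       IFETYPE: Optional[int] = None       # optional ethertype override
       StatId: Optional[int] = None        # optional index into IFEStats
       MetaFilterList: Optional[List[int]] = None  # optional whitelist

   row = IFEInfo(DSTFE=bytes(6), SRCFE=bytes(6), MetaFilterList=[0x0001])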

6.3. Inter-FE LFB XML Model
  <LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.1"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         provides="IFE">
    <frameDefs>
       <frameDef>
           <name>PacketAny</name>
            <synopsis>Arbitrary Packet</synopsis>
       </frameDef>
       <frameDef>
           <name>InterFEFrame</name>
           <synopsis>
                   Ethernet frame with encapsulated IFE information
           </synopsis>
       </frameDef>
        
    </frameDefs>
        

<dataTypeDefs>

      <dataTypeDef>
         <name>bstats</name>
         <synopsis>Basic stats</synopsis>
      <struct>
          <component componentID="1">
           <name>bytes</name>
           <synopsis>The total number of bytes seen</synopsis>
           <typeRef>uint64</typeRef>
          </component>
        
          <component componentID="2">
           <name>packets</name>
           <synopsis>The total number of packets seen</synopsis>
           <typeRef>uint32</typeRef>
          </component>
        
          <component componentID="2">
           <name>packets</name>
           <synopsis>The total number of packets seen</synopsis>
           <typeRef>uint32</typeRef>
          </component>
        
          <component componentID="3">
           <name>errors</name>
           <synopsis>The total number of packets with errors</synopsis>
           <typeRef>uint32</typeRef>
          </component>
      </struct>
        
          <component componentID="3">
           <name>errors</name>
           <synopsis>The total number of packets with errors</synopsis>
           <typeRef>uint32</typeRef>
          </component>
      </struct>
        
     </dataTypeDef>
        
       <dataTypeDef>
          <name>IFEInfo</name>
          <synopsis>Describing IFE table row Information</synopsis>
          <struct>
             <component componentID="1">
               <name>IFETYPE</name>
               <synopsis>
                   The ethertype to be used for outgoing IFE frame
               </synopsis>
               <optional/>
               <typeRef>uint16</typeRef>
             </component>
             <component componentID="2">
               <name>StatId</name>
               <synopsis>
                   The Index into the stats table
               </synopsis>
               <optional/>
               <typeRef>uint32</typeRef>
             </component>
             <component componentID="3">
               <name>DSTFE</name>
               <synopsis>
                       The destination MAC address of the destination FE
               </synopsis>
               <typeRef>byte[6]</typeRef>
             </component>
             <component componentID="4">
               <name>SRCFE</name>
               <synopsis>
                       The source MAC address used for the source FE
               </synopsis>
               <typeRef>byte[6]</typeRef>
             </component>
             <component componentID="5">
               <name>MetaFilterList</name>
               <synopsis>
                       The allowed metadata filter table
               </synopsis>
               <optional/>
               <array type="variable-size">
                 <typeRef>uint32</typeRef>
               </array>
              </component>
        
          </struct>
       </dataTypeDef>
        
    </dataTypeDefs>
        
    <LFBClassDefs>
      <LFBClassDef LFBClassID="18">
        <name>IFE</name>
        <synopsis>
           This LFB describes IFE connectivity parameterization
        </synopsis>
        <version>1.0</version>
        

<inputPorts>

            <inputPort group="true">
             <name>EgressInGroup</name>
             <synopsis>
                     The input port group of the egress side.
                     It expects any type of Ethernet frame.
             </synopsis>
             <expectation>
                  <frameExpected>
                  <ref>PacketAny</ref>
                  </frameExpected>
             </expectation>
            </inputPort>
        
            <inputPort  group="true">
             <name>IngressInGroup</name>
             <synopsis>
                     The input port group of the ingress side.
                     It expects an interFE-encapsulated Ethernet frame.
              </synopsis>
             <expectation>
                  <frameExpected>
                  <ref>InterFEFrame</ref>
                  </frameExpected>
             </expectation>
          </inputPort>
        
         </inputPorts>
        

<outputPorts>

           <outputPort>
             <name>OUT1</name>
             <synopsis>
                  The output port of the egress side
             </synopsis>
        
             <product>
                <frameProduced>
                  <ref>InterFEFrame</ref>
                </frameProduced>
             </product>
          </outputPort>
        
          <outputPort>
            <name>OUT2</name>
            <synopsis>
                The output port of the Ingress side
            </synopsis>
            <product>
               <frameProduced>
                 <ref>PacketAny</ref>
               </frameProduced>
            </product>
         </outputPort>
        
         <outputPort>
           <name>EXCEPTIONOUT</name>
           <synopsis>
              The exception handling path
           </synopsis>
           <product>
              <frameProduced>
                <ref>PacketAny</ref>
              </frameProduced>
              <metadataProduced>
                <ref>ExceptionID</ref>
              </metadataProduced>
           </product>
        </outputPort>
        
     </outputPorts>
        

<components>

        <component componentID="1" access="read-write">
           <name>IFETable</name>
           <synopsis>
              The table of all inter-FE relations
           </synopsis>
           <array type="variable-size">
              <typeRef>IFEInfo</typeRef>
           </array>
        </component>
        
        <component componentID="1" access="read-write">
           <name>IFETable</name>
           <synopsis>
              The table of all inter-FE relations
           </synopsis>
           <array type="variable-size">
              <typeRef>IFEInfo</typeRef>
           </array>
        </component>
        
       <component componentID="2" access="read-only">
         <name>IFEStats</name>
         <synopsis>
          The stats corresponding to the IFETable table
         </synopsis>
         <typeRef>bstats</typeRef>
       </component>
    </components>
        
       <component componentID="2" access="read-only">
         <name>IFEStats</name>
         <synopsis>
          The stats corresponding to the IFETable table
         </synopsis>
         <typeRef>bstats</typeRef>
       </component>
    </components>
        
   </LFBClassDef>
  </LFBClassDefs>
        
  </LFBLibrary>
        

Figure 8: Inter-FE LFB XML
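
As a non-normative illustration only, the data types defined in Figure 8 might map onto in-memory structures in an FE implementation roughly as sketched below. All identifiers in the sketch (struct names, field names, and the accounting helper) are hypothetical and are not part of this specification; the field widths follow the bstats and IFEInfo definitions above.

   /* Non-normative sketch of the IFE LFB components from Figure 8.
      All identifiers are illustrative only. */
   #include <stddef.h>
   #include <stdint.h>

   struct bstats {
       uint64_t bytes;            /* total number of bytes seen */
       uint32_t packets;          /* total number of packets seen */
       uint32_t errors;           /* total number of packets with errors */
   };

   struct ife_info {              /* one IFETable row (type IFEInfo) */
       uint16_t ifetype;          /* optional: ethertype for outgoing IFE frames */
       uint32_t stat_id;          /* optional: index into the stats table */
       uint8_t  dstfe[6];         /* destination FE MAC address */
       uint8_t  srcfe[6];         /* source FE MAC address */
       uint32_t *meta_filter;     /* optional: allowed metadata IDs ... */
       size_t   meta_filter_cnt;  /* ... and how many of them */
   };

   /* Possible accounting on the egress side: update the IFEStats
      component (read-only from the CE's perspective) per frame. */
   static void ife_account(struct bstats *st, size_t frame_len, int err)
   {
       if (err) {
           st->errors++;
           return;
       }
       st->bytes += frame_len;
       st->packets++;
   }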

7. IANA Considerations

IANA has registered the following LFB class name in the "Logical Functional Block (LFB) Class Names and Class Identifiers" subregistry of the "Forwarding and Control Element Separation (ForCES)" registry <https://www.iana.org/assignments/forces>.

   +------------+--------+---------+-----------------------+-----------+
   | LFB Class  |  LFB   |   LFB   |      Description      | Reference |
   | Identifier | Class  | Version |                       |           |
   |            |  Name  |         |                       |           |
   +------------+--------+---------+-----------------------+-----------+
   |     18     |  IFE   |   1.0   |     An IFE LFB to     |    This   |
   |            |        |         |  standardize inter-FE |  document |
   |            |        |         |     LFB for ForCES    |           |
   |            |        |         |    Network Elements   |           |
   +------------+--------+---------+-----------------------+-----------+
        

Logical Functional Block (LFB) Class Names and Class Identifiers

8. IEEE Assignment Considerations

This memo includes a request for a new Ethernet protocol type as described in Section 5.2.

9. Security Considerations

The FEs involved in the inter-FE LFB belong to the same NE and are within the scope of a single administrative Ethernet LAN private network. Although trust in the control policy and its treatment in the datapath already exists within the NE, an inter-FE LFB implementation SHOULD support the security services provided by Media Access Control Security (MACsec) [ieee8021ae]. MACsec is not yet widely deployed in traditional packet-processing hardware, although it is present in newer versions of the Linux kernel (which will be widely deployed) [linux-macsec]. Over time, we expect that most FEs will be able to support MACsec.

MACsec provides security services such as a message authentication service and an optional confidentiality service. The services can be configured manually or automatically using the MACsec Key Agreement (MKA) over the IEEE 802.1x [ieee8021x] Extensible Authentication Protocol (EAP) framework. It is expected that FE implementations are going to start with shared keys configured from the control plane but progress to automated key management.

The following are the MACsec security mechanisms that need to be in place for the inter-FE LFB:

o Security mechanisms are NE-wide for all FEs. Once the security is turned on, depending upon the chosen security level (e.g., Authentication, Confidentiality), it will be in effect for the inter-FE LFB for the entire duration of the session.

o An operator SHOULD configure the same security policies for all participating FEs in the NE cluster. This will ensure uniform operations and avoid unnecessary complexity in policy configuration. In other words, the Security Association Keys (SAKs) should be pre-shared. When using MKA, FEs must identify themselves with a shared Connectivity Association Key (CAK) and Connectivity Association Key Name (CKN). EAP-TLS SHOULD be used as the EAP method.

o An operator SHOULD configure the strict validation mode, i.e., all non-protected, invalid, or non-verifiable frames MUST be dropped.
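
As a purely illustrative, non-normative sketch, the strict-validation rule above could be enforced by dropping any frame that the MACsec layer has not positively verified before it reaches the IFE IngressInGroup port. The verdict enumeration and function below are hypothetical and do not correspond to any standardized API:

   #include <stdint.h>

   /* Hypothetical verdict reported by the MACsec layer for a frame. */
   enum macsec_verdict { MACSEC_VERIFIED, MACSEC_UNPROTECTED, MACSEC_INVALID };

   /* Strict validation: only MACsec-verified frames are handed to IFE
      ingress processing; everything else is dropped and counted as an
      error (cf. the bstats 'errors' component in Figure 8). */
   static int ife_ingress_accept(enum macsec_verdict v, uint32_t *errors)
   {
       if (v != MACSEC_VERIFIED) {
           (*errors)++;
           return 0;   /* drop non-protected, invalid, or non-verifiable frames */
       }
       return 1;       /* deliver to the IngressInGroup port */
   }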

It should be noted that, given the above choices, if an FE is compromised, an entity running on that FE would be able to fake inter-FE LFB frames or modify their content, causing bad outcomes.

10. References
10.1. Normative References

[ieee8021ae] IEEE, "IEEE Standard for Local and metropolitan area networks Media Access Control (MAC) Security", IEEE 802.1AE-2006, DOI 10.1109/IEEESTD.2006.245590, <http://ieeexplore.ieee.org/document/1678345/>.

[ieee8021x] IEEE, "IEEE Standard for Local and metropolitan area networks - Port-Based Network Access Control.", IEEE 802.1X-2010, DOI 10.1109/IEEESTD.2010.5409813, <http://ieeexplore.ieee.org/document/5409813/>.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <http://www.rfc-editor.org/info/rfc2119>.

[RFC5810] Doria, A., Ed., Hadi Salim, J., Ed., Haas, R., Ed., Khosravi, H., Ed., Wang, W., Ed., Dong, L., Gopal, R., and J. Halpern, "Forwarding and Control Element Separation (ForCES) Protocol Specification", RFC 5810, DOI 10.17487/RFC5810, March 2010, <http://www.rfc-editor.org/info/rfc5810>.

[RFC5811] Hadi Salim, J. and K. Ogawa, "SCTP-Based Transport Mapping Layer (TML) for the Forwarding and Control Element Separation (ForCES) Protocol", RFC 5811, DOI 10.17487/RFC5811, March 2010, <http://www.rfc-editor.org/info/rfc5811>.

[RFC5812] Halpern, J. and J. Hadi Salim, "Forwarding and Control Element Separation (ForCES) Forwarding Element Model", RFC 5812, DOI 10.17487/RFC5812, March 2010, <http://www.rfc-editor.org/info/rfc5812>.

[RFC7391] Hadi Salim, J., "Forwarding and Control Element Separation (ForCES) Protocol Extensions", RFC 7391, DOI 10.17487/RFC7391, October 2014, <http://www.rfc-editor.org/info/rfc7391>.

[RFC7408] Haleplidis, E., "Forwarding and Control Element Separation (ForCES) Model Extension", RFC 7408, DOI 10.17487/RFC7408, November 2014, <http://www.rfc-editor.org/info/rfc7408>.

10.2. Informative References

[brcm-higig] Broadcom, "HiGig", <http://www.broadcom.com/products/ethernet-communication-and-switching/switching/bcm56720>.

[circuit-b] Fairhurst, G., "Network Transport Circuit Breakers", Work in Progress, draft-ietf-tsvwg-circuit-breaker-15, April 2016.

[linux-macsec] Dubroca, S., "MACsec: Encryption for the wired LAN", Netdev 11, Feb 2016.

[linux-tc] Hadi Salim, J., "Linux Traffic Control Classifier-Action Subsystem Architecture", Netdev 01, Feb 2015.

[RFC791] Postel, J., "Internet Protocol", STD 5, RFC 791, DOI 10.17487/RFC0791, September 1981, <http://www.rfc-editor.org/info/rfc791>.

[RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460, December 1998, <http://www.rfc-editor.org/info/rfc2460>.

[RFC3746] Yang, L., Dantu, R., Anderson, T., and R. Gopal, "Forwarding and Control Element Separation (ForCES) Framework", RFC 3746, DOI 10.17487/RFC3746, April 2004, <http://www.rfc-editor.org/info/rfc3746>.

[RFC6956] Wang, W., Haleplidis, E., Ogawa, K., Li, C., and J. Halpern, "Forwarding and Control Element Separation (ForCES) Logical Function Block (LFB) Library", RFC 6956, DOI 10.17487/RFC6956, June 2013, <http://www.rfc-editor.org/info/rfc6956>.

[tc-ife] Hadi Salim, J. and D. Joachimpillai, "Distributing Linux Traffic Control Classifier-Action Subsystem", Netdev 01, Feb 2015.

[UDP-GUIDE] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage Guidelines", Work in Progress, draft-ietf-tsvwg-rfc5405bis-19, October 2016.

Acknowledgements

The authors would like to thank Joel Halpern and Dave Hood for the stimulating discussions. Evangelos Haleplidis shepherded and contributed to improving this document. Alia Atlas was the AD sponsor of this document and did a tremendous job of critiquing it. The authors are grateful to Joel Halpern and Sue Hares in their roles as the Routing Area reviewers for shaping the content of this document. David Black put in a lot of effort to make sure the congestion-control considerations are sane. Russ Housley did the Gen-ART review, Joe Touch did the TSV area review, and Shucheng LIU (Will) did the OPS review. Suresh Krishnan helped us provide clarity during the IESG review. The authors are appreciative of the efforts Stephen Farrell put in to fixing the security section.

Authors' Addresses

   Damascane M. Joachimpillai
   Verizon
   60 Sylvan Rd
   Waltham, MA  02451
   United States of America

   Email: damascene.joachimpillai@verizon.com
        

   Jamal Hadi Salim
   Mojatatu Networks
   Suite 200, 15 Fitzgerald Rd.
   Ottawa, Ontario  K2H 9G1
   Canada

   Email: hadi@mojatatu.com
        