Internet Engineering Task Force (IETF)                       M. Lasserre
Request for Comments: 7365                                      F. Balus
Category: Informational                                   Alcatel-Lucent
ISSN: 2070-1721                                                 T. Morin
                                                                  Orange
                                                                N. Bitar
                                                                 Verizon
                                                              Y. Rekhter
                                                                 Juniper
                                                            October 2014
        
Framework for Data Center (DC) Network Virtualization over Layer 3

Abstract

This document provides a framework for Data Center (DC) Network Virtualization over Layer 3 (NVO3) and defines a reference model along with logical components required to design a solution.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7365.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction ....................................................4
      1.1. General Terminology ........................................4
      1.2. DC Network Architecture ....................................7
   2. Reference Models ................................................8
      2.1. Generic Reference Model ....................................8
      2.2. NVE Reference Model .......................................10
      2.3. NVE Service Types .........................................11
           2.3.1. L2 NVE Providing Ethernet LAN-Like Service .........11
           2.3.2. L3 NVE Providing IP/VRF-Like Service ...............11
      2.4. Operational Management Considerations .....................12
   3. Functional Components ..........................................12
      3.1. Service Virtualization Components .........................12
           3.1.1. Virtual Access Points (VAPs) .......................12
           3.1.2. Virtual Network Instance (VNI) .....................12
           3.1.3. Overlay Modules and VN Context .....................14
           3.1.4. Tunnel Overlays and Encapsulation Options ..........14
           3.1.5. Control-Plane Components ...........................14
                  3.1.5.1. Distributed vs. Centralized
                           Control Plane .............................14
                  3.1.5.2. Auto-provisioning and Service Discovery ...15
                  3.1.5.3. Address Advertisement and Tunnel Mapping ..15
                  3.1.5.4. Overlay Tunneling .........................16
      3.2. Multihoming ...............................................16
      3.3. VM Mobility ...............................................17
   4. Key Aspects of Overlay Networks ................................17
      4.1. Pros and Cons .............................................18
      4.2. Overlay Issues to Consider ................................19
           4.2.1. Data Plane vs. Control Plane Driven ................19
           4.2.2. Coordination between Data Plane and Control Plane ..19
           4.2.3. Handling Broadcast, Unknown Unicast, and
                  Multicast (BUM) Traffic ............................20
           4.2.4. Path MTU ...........................................20
           4.2.5. NVE Location Trade-Offs ............................21
           4.2.6. Interaction between Network Overlays and
                  Underlays ..........................................22
   5. Security Considerations ........................................22
   6. Informative References .........................................24
   Acknowledgments ...................................................26
   Authors' Addresses ................................................26
        
1. Introduction

This document provides a framework for Data Center (DC) Network Virtualization over Layer 3 (NVO3) tunnels. This framework is intended to aid in standardizing protocols and mechanisms to support large-scale network virtualization for data centers.

[RFC7364] defines the rationale for using overlay networks in order to build large multi-tenant data center networks. Compute, storage and network virtualization are often used in these large data centers to support a large number of communication domains and end systems.

This document provides reference models and functional components of data center overlay networks as well as a discussion of technical issues that have to be addressed.

1.1. General Terminology

This document uses the following terminology:

NVO3 Network: An overlay network that provides a Layer 2 (L2) or Layer 3 (L3) service to Tenant Systems over an L3 underlay network using the architecture and protocols as defined by the NVO3 Working Group.

Network Virtualization Edge (NVE): An NVE is the network entity that sits at the edge of an underlay network and implements L2 and/or L3 network virtualization functions. The network-facing side of the NVE uses the underlying L3 network to tunnel tenant frames to and from other NVEs. The tenant-facing side of the NVE sends and receives Ethernet frames to and from individual Tenant Systems. An NVE could be implemented as part of a virtual switch within a hypervisor, a physical switch or router, or a Network Service Appliance, or it could be split across multiple devices.

Virtual Network (VN): A VN is a logical abstraction of a physical network that provides L2 or L3 network services to a set of Tenant Systems. A VN is also known as a Closed User Group (CUG).

Virtual Network Instance (VNI): A specific instance of a VN from the perspective of an NVE.

Virtual Network Context (VN Context) Identifier: Field in an overlay encapsulation header that identifies the specific VN the packet belongs to. The egress NVE uses the VN Context identifier to deliver the packet to the correct Tenant System. The VN Context identifier can be a locally significant identifier or a globally unique identifier.

Underlay or Underlying Network: The network that provides the connectivity among NVEs and that NVO3 packets are tunneled over, where an NVO3 packet carries an NVO3 overlay header followed by a tenant packet. The underlay network does not need to be aware that it is carrying NVO3 packets. Addresses on the underlay network appear as "outer addresses" in encapsulated NVO3 packets. In general, the underlay network can use a completely different protocol (and address family) from that of the overlay. In the case of NVO3, the underlay network is IP.

Data Center (DC): A physical complex housing physical servers, network switches and routers, network service appliances, and networked storage. The purpose of a data center is to provide application, compute, and/or storage services. One such service is virtualized infrastructure data center services, also known as "Infrastructure as a Service".

Virtual Data Center (Virtual DC): A container for virtualized compute, storage, and network services. A virtual DC is associated with a single tenant and can contain multiple VNs and Tenant Systems connected to one or more of these VNs.

Virtual Machine (VM): A software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Applications (generally) do not know they are running on a VM as opposed to running on a "bare metal" host or server, though some systems provide a para-virtualization environment that allows an operating system or application to be aware of the presence of virtualization for optimization purposes.

Hypervisor: Software running on a server that allows multiple VMs to run on the same physical server. The hypervisor manages and provides shared computation, memory, and storage services and network connectivity to the VMs that it hosts. Hypervisors often embed a virtual switch (see below).

Server: A physical end-host machine that runs user applications. A standalone (or "bare metal") server runs a conventional operating system hosting a single-tenant application. A virtualized server runs a hypervisor supporting one or more VMs.

Virtual Switch (vSwitch): A function within a hypervisor (typically implemented in software) that provides similar forwarding services to a physical Ethernet switch. A vSwitch forwards Ethernet frames between VMs running on the same server or between a VM and a physical Network Interface Card (NIC) connecting the server to a physical Ethernet switch or router. A vSwitch also enforces network isolation between VMs that by policy are not permitted to communicate with each other (e.g., by honoring VLANs). A vSwitch may be bypassed when an NVE is enabled on the host server.

Tenant: The customer using a virtual network and any associated resources (e.g., compute, storage, and network). A tenant could be an enterprise or a department/organization within an enterprise.

Tenant System: A physical or virtual system that can play the role of a host or a forwarding element such as a router, switch, firewall, etc. It belongs to a single tenant and connects to one or more VNs of that tenant.

Tenant Separation: Refers to isolating traffic of different tenants such that traffic from one tenant is not visible to or delivered to another tenant, except when allowed by policy. Tenant separation also refers to address space separation, whereby different tenants can use the same address space without conflict.

Virtual Access Points (VAPs): A logical connection point on the NVE for connecting a Tenant System to a virtual network. Tenant Systems connect to VNIs at an NVE through VAPs. VAPs can be physical ports or virtual ports identified through logical interface identifiers (e.g., VLAN ID or internal vSwitch Interface ID connected to a VM).

End Device: A physical device that connects directly to the DC underlay network. This is in contrast to a Tenant System, which connects to a corresponding tenant VN. An End Device is administered by the DC operator rather than a tenant and is part of the DC infrastructure. An End Device may implement NVO3 technology in support of NVO3 functions. Examples of an End Device include hosts (e.g., server or server blade), storage systems (e.g., file servers and iSCSI storage systems), and network devices (e.g., firewall, load-balancer, and IPsec gateway).

Network Virtualization Authority (NVA): Entity that provides reachability and forwarding information to NVEs.

1.2. DC Network Architecture

A generic architecture for data centers is depicted in Figure 1:

                                ,---------.
                              ,'           `.
                             (  IP/MPLS WAN )
                              `.           ,'
                                `-+------+'
                                 \      /
                          +--------+   +--------+
                          |   DC   |+-+|   DC   |
                          |gateway |+-+|gateway |
                          +--------+   +--------+
                                |       /
                                .--. .--.
                              (    '    '.--.
                            .-.' Intra-DC     '
                           (     network      )
                            (             .'-'
                             '--'._.'.    )\ \
                             / /     '--'  \ \
                            / /      | |    \ \
                   +--------+   +--------+   +--------+
                   | access |   | access |   | access |
                   | switch |   | switch |   | switch |
                   +--------+   +--------+   +--------+
                      /     \    /    \     /      \
                   __/_      \  /      \   /_      _\__
             '--------'   '--------'   '--------'   '--------'
             :  End   :   :  End   :   :  End   :   :  End   :
             : Device :   : Device :   : Device :   : Device :
             '--------'   '--------'   '--------'   '--------'
        

Figure 1: A Generic Architecture for Data Centers

An example of a multi-tier DC network architecture is presented in Figure 1. It provides a view of the physical components inside a DC.

A DC network is usually composed of intra-DC networks and network services, and inter-DC network and network connectivity services.

DC networking elements can act as strict L2 switches and/or provide IP routing capabilities, including network service virtualization.

In some DC architectures, some tier layers could provide L2 and/or L3 services. In addition, some tier layers may be collapsed, and Internet connectivity, inter-DC connectivity, and VPN support may be handled by a smaller number of nodes. Nevertheless, one can assume that the network functional blocks in a DC fit in the architecture depicted in Figure 1.

The following components can be present in a DC:

- Access switch: Hardware-based Ethernet switch aggregating all Ethernet links from the End Devices in a rack representing the entry point in the physical DC network for the hosts. It may also provide routing functionality, virtual IP network connectivity, or Layer 2 tunneling over IP, for instance. Access switches are usually multihomed to aggregation switches in the Intra-DC network. A typical example of an access switch is a Top-of-Rack (ToR) switch. Other deployment scenarios may use an intermediate Blade Switch before the ToR, or an End-of-Row (EoR) switch, to provide similar functions to a ToR.

- Intra-DC Network: Network composed of high-capacity core nodes (Ethernet switches/routers). Core nodes may provide virtual Ethernet bridging and/or IP routing services.

- DC Gateway (DC GW): Gateway to the outside world providing DC interconnect and connectivity to Internet and VPN customers. In the current DC network model, this may be simply a router connected to the Internet and/or an IP VPN/L2VPN PE. Some network implementations may dedicate DC GWs for different connectivity types (e.g., a DC GW for Internet and another for VPN).

Note that End Devices may be single-homed or multihomed to access switches.

2. Reference Models
2.1. Generic Reference Model

Figure 2 depicts a DC reference model for network virtualization overlays where NVEs provide a logical interconnect between Tenant Systems that belong to a specific VN.

         +--------+                                    +--------+
         | Tenant +--+                            +----| Tenant |
         | System |  |                           (')   | System |
         +--------+  |    .................     (   )  +--------+
                     |  +---+           +---+    (_)
                     +--|NVE|---+   +---|NVE|-----+
                        +---+   |   |   +---+
                        / .    +-----+      .
                       /  . +--| NVA |--+   .
                      /   . |  +-----+   \  .
                     |    . |             \ .
                     |    . |   Overlay   +--+--++--------+
         +--------+  |    . |   Network   | NVE || Tenant |
         | Tenant +--+    . |             |     || System |
         | System |       .  \ +---+      +--+--++--------+
         +--------+       .....|NVE|.........
                               +---+
                                 |
                                 |
                       =====================
                         |               |
                     +--------+      +--------+
                     | Tenant |      | Tenant |
                     | System |      | System |
                     +--------+      +--------+
        

Figure 2: Generic Reference Model for DC Network Virtualization Overlays

In order to obtain reachability information, NVEs may exchange information directly between themselves via a control-plane protocol. In this case, a control-plane module resides in every NVE.

It is also possible for NVEs to communicate with an external Network Virtualization Authority (NVA) to obtain reachability and forwarding information. In this case, a protocol is used between NVEs and NVA(s) to exchange information.

It should be noted that NVAs may be organized in clusters for redundancy and scalability and can appear as one logically centralized controller. In this case, inter-NVA communication is necessary to synchronize state among nodes within a cluster or share information across clusters. The information exchanged between NVAs of the same cluster could be different from the information exchanged across clusters.

A Tenant System can be attached to an NVE in several ways:

- locally, by being co-located in the same End Device

- remotely, via a point-to-point connection or a switched network

When an NVE is co-located with a Tenant System, the state of the Tenant System can be determined without protocol assistance. For instance, the operational status of a VM can be communicated via a local API. When an NVE is remotely connected to a Tenant System, the state of the Tenant System or NVE needs to be exchanged directly or via a management entity, using a control-plane protocol or API, or directly via a data-plane protocol.

The functional components in Figure 2 do not necessarily map directly to the physical components described in Figure 1. For example, an End Device can be a server blade with VMs and a virtual switch. A VM can be a Tenant System, and the NVE functions may be performed by the host server. In this case, the Tenant System and NVE function are co-located. Another example is the case where the End Device is the Tenant System and the NVE function can be implemented by the connected ToR. In this case, the Tenant System and NVE function are not co-located.

Underlay nodes utilize L3 technologies to interconnect NVE nodes. These nodes perform forwarding based on outer L3 header information, and generally do not maintain state for each tenant service, albeit some applications (e.g., multicast) may require control-plane or forwarding-plane information that pertains to a tenant, group of tenants, tenant service, or a set of services that belong to one or more tenants. Mechanisms to control the amount of state maintained in the underlay may be needed.

2.2. NVE Reference Model

Figure 3 depicts the NVE reference model. One or more VNIs can be instantiated on an NVE. A Tenant System interfaces with a corresponding VNI via a VAP. An overlay module provides tunneling overlay functions (e.g., encapsulation and decapsulation of tenant traffic, tenant identification, and mapping, etc.).

                     +-------- L3 Network -------+
                     |                           |
                     |        Tunnel Overlay     |
         +------------+---------+       +---------+------------+
         | +----------+-------+ |       | +---------+--------+ |
         | |  Overlay Module  | |       | |  Overlay Module  | |
         | +---------+--------+ |       | +---------+--------+ |
         |           |VN Context|       | VN Context|          |
         |           |          |       |           |          |
         |  +--------+-------+  |       |  +--------+-------+  |
         |  | |VNI|   .  |VNI|  |       |  | |VNI|   .  |VNI|  |
    NVE1 |  +-+------------+-+  |       |  +-+-----------+--+  | NVE2
         |    |   VAPs     |    |       |    |    VAPs   |     |
         +----+------------+----+       +----+-----------+-----+
              |            |                 |           |
              |            |                 |           |
             Tenant Systems                 Tenant Systems
        

Figure 3: Generic NVE Reference Model

Note that some NVE functions (e.g., data-plane and control-plane functions) may reside in one device or may be implemented separately in different devices.

2.3. NVE Service Types

An NVE provides different types of virtualized network services to multiple tenants, i.e., an L2 service or an L3 service. Note that an NVE may be capable of providing both L2 and L3 services for a tenant. This section defines the service types and associated attributes.

2.3.1. L2 NVE Providing Ethernet LAN-Like Service

An L2 NVE implements Ethernet LAN emulation, an Ethernet-based multipoint service similar to an IETF Virtual Private LAN Service (VPLS) [RFC4761][RFC4762] or Ethernet VPN [EVPN] service, where the Tenant Systems appear to be interconnected by a LAN environment over an L3 overlay. As such, an L2 NVE provides per-tenant virtual switching instance (L2 VNI) and L3 (IP/MPLS) tunneling encapsulation of tenant Media Access Control (MAC) frames across the underlay. Note that the control plane for an L2 NVE could be implemented locally on the NVE or in a separate control entity.

2.3.2. L3 NVE Providing IP/VRF-Like Service

An L3 NVE provides virtualized IP forwarding service, similar to IETF IP VPN (e.g., BGP/MPLS IP VPN [RFC4364]) from a service definition perspective. That is, an L3 NVE provides per-tenant forwarding and routing instance (L3 VNI) and L3 (IP/MPLS) tunneling encapsulation of tenant IP packets across the underlay. Note that routing could be performed locally on the NVE or in a separate control entity.

2.4. Operational Management Considerations

NVO3 services are overlay services over an IP underlay.

As far as the IP underlay is concerned, existing IP Operations, Administration, and Maintenance (OAM) facilities are used.

With regard to the NVO3 overlay, both L2 and L3 services can be offered. It is expected that existing fault and performance OAM facilities will be used. Sections 4.1 and 4.2.6 provide further discussion of additional fault and performance management issues to consider.

As far as configuration is concerned, the DC environment is driven by the need to bring new services up rapidly and is typically very dynamic, specifically in the context of virtualized services. It is therefore critical to automate the configuration of NVO3 services.

3. Functional Components

This section decomposes the network virtualization architecture into the functional components described in Figure 3 to make it easier to discuss solution options for these components.

3.1. Service Virtualization Components
3.1.1. Virtual Access Points (VAPs)

Tenant Systems are connected to VNIs through Virtual Access Points (VAPs).

VAPs can be physical ports or virtual ports identified through logical interface identifiers (e.g., VLAN ID and internal vSwitch Interface ID connected to a VM).

3.1.2. Virtual Network Instance (VNI)

A VNI is a specific VN instance on an NVE. Each VNI defines a forwarding context that contains reachability information and policies.

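For illustration only, the per-NVE state described above can be sketched as simple data structures; the Python sketch below uses invented names and a dictionary-based layout and is not meant to prescribe any particular implementation.

      # Illustrative sketch (not normative); all names are hypothetical.
      from dataclasses import dataclass, field
      from typing import Dict, Set

      @dataclass
      class Vni:
          vn_context: int                  # VN Context carried on the wire
          vaps: Set[str] = field(default_factory=set)   # local VAPs
          # reachability: tenant address -> remote NVE underlay address
          forwarding: Dict[str, str] = field(default_factory=dict)
          policies: Dict[str, str] = field(default_factory=dict)

      @dataclass
      class Nve:
          underlay_ip: str
          vnis: Dict[str, Vni] = field(default_factory=dict)  # keyed by VN id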

3.1.3. Overlay Modules and VN Context

Mechanisms for identifying each tenant service are required to allow the simultaneous overlay of multiple tenant services over the same underlay L3 network topology. In the data plane, each NVE, upon sending a tenant packet, must be able to encode the VN Context for the destination NVE in addition to the L3 tunneling information (e.g., source IP address identifying the source NVE and the destination IP address identifying the destination NVE, or MPLS label). This allows the destination NVE to identify the tenant service instance and therefore appropriately process and forward the tenant packet.

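As a purely illustrative sketch, the encapsulation could carry a 24-bit VN Context value in a small overlay header placed between the outer (underlay) header and the tenant frame; the field layout below is an assumption made for the example and does not describe any specific NVO3 encapsulation.

      import struct

      def encapsulate(tenant_frame: bytes, vn_context: int) -> bytes:
          # Prepend an 8-byte overlay header carrying the VN Context.
          # The outer IP header (source NVE, destination NVE) would be
          # added by the underlay stack when the packet is tunneled.
          flags = 0x08                     # "VN Context present" (assumed)
          return struct.pack("!B3xI", flags, vn_context << 8) + tenant_frame

      def decapsulate(packet: bytes):
          # Egress NVE: recover the VN Context to select the VNI, then
          # hand the inner frame to the appropriate VAP.
          flags, ctx = struct.unpack("!B3xI", packet[:8])
          return ctx >> 8, packet[8:]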

The overlay module provides tunneling overlay functions: tunnel initiation/termination as in the case of stateful tunnels (see Section 3.1.4) and/or encapsulation/decapsulation of frames from the VAPs/L3 underlay.

In a multi-tenant context, tunneling aggregates frames from/to different VNIs. Tenant identification and traffic demultiplexing are based on the VN Context identifier.

The following approaches can be considered:

- VN Context identifier per Tenant: This is a globally unique (on a per-DC administrative domain) VN identifier used to identify the corresponding VNI. Examples of such identifiers in existing technologies are IEEE VLAN IDs and Service Instance IDs (I-SIDs) that identify virtual L2 domains when using IEEE 802.1Q and IEEE 802.1ah, respectively. Note that multiple VN identifiers can belong to a tenant.

- One VN Context identifier per VNI: Each VNI value is automatically generated by the egress NVE, or a control plane associated with that NVE, and usually distributed by a control-plane protocol to all the related NVEs. An example of this approach is the use of per-VRF MPLS labels in IP VPN [RFC4364]. The VNI value is therefore locally significant to the egress NVE.

- One VN Context identifier per VAP: A value locally significant to an NVE is assigned and usually distributed by a control-plane protocol to identify a VAP. An example of this approach is the use of per-CE MPLS labels in IP VPN [RFC4364].

Note that when using one VN Context per VNI or per VAP, an additional global identifier (e.g., a VN identifier or name) may be used by the control plane to identify the tenant context.

3.1.4. Tunnel Overlays and Encapsulation Options

Once the VN Context identifier is added to the frame, an L3 tunnel encapsulation is used to transport the frame to the destination NVE.

Different IP tunneling options (e.g., Generic Routing Encapsulation (GRE), the Layer 2 Tunneling Protocol (L2TP), and IPsec) and MPLS tunneling can be used. Tunneling could be stateless or stateful. Stateless tunneling simply entails the encapsulation of a tenant packet with another header necessary for forwarding the packet across the underlay (e.g., IP tunneling over an IP underlay). Stateful tunneling, on the other hand, entails maintaining tunneling state at the tunnel endpoints (i.e., NVEs). Tenant packets on an ingress NVE can then be transmitted over such tunnels to a destination (egress) NVE by encapsulating the packets with a corresponding tunneling header. The tunneling state at the endpoints may be configured or dynamically established. Solutions should specify the tunneling technology used and whether it is stateful or stateless. In this document, however, tunneling and tunneling encapsulation are used interchangeably to simply mean the encapsulation of a tenant packet with a tunneling header necessary to carry the packet between an ingress NVE and an egress NVE across the underlay. It should be noted that stateful tunneling, especially when configuration is involved, does impose management overhead and scale constraints. When confidentiality is required, the use of opportunistic security [OPPSEC] can be used as a stateless tunneling solution.

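The distinction can be illustrated as follows (Python sketch with placeholder helpers; not tied to any specific tunneling technology): stateless tunneling is a pure per-packet operation, whereas stateful tunneling requires tunnel state to be established at the NVEs before tenant packets can be carried.

      def outer_header(src_nve, dst_nve):      # placeholder underlay header
          return f"IP({src_nve}->{dst_nve})|".encode()

      def overlay_header(vn_context):          # placeholder VN Context field
          return f"CTX({vn_context})|".encode()

      # Stateless: no per-tunnel state is kept between packets.
      def send_stateless(tenant_pkt, src_nve, dst_nve, vn_context):
          return outer_header(src_nve, dst_nve) + \
                 overlay_header(vn_context) + tenant_pkt

      # Stateful: tunnel state (identity, resources, liveness) is
      # configured or signalled at both endpoints before use.
      class StatefulTunnel:
          def __init__(self, local_nve, remote_nve):
              self.local, self.remote = local_nve, remote_nve
              self.established = False
          def setup(self):                     # management or control plane
              self.established = True
          def send(self, tenant_pkt, vn_context):
              assert self.established, "tunnel state must exist first"
              return outer_header(self.local, self.remote) + \
                     overlay_header(vn_context) + tenant_pkt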

3.1.5. Control-Plane Components
3.1.5.1. Distributed vs. Centralized Control Plane

Control- and management-plane entities can be centralized or distributed. Both approaches have been used extensively in the past. The routing model of the Internet is a good example of a distributed approach. Transport networks have usually used a centralized approach to manage transport paths.

It is also possible to combine the two approaches, i.e., using a hybrid model. A global view of network state can have many benefits, but it does not preclude the use of distributed protocols within the network. Centralized models provide a facility to maintain global state and distribute that state to the network. When used in combination with distributed protocols, greater network efficiencies, improved reliability, and robustness can be achieved. Domain- and/or deployment-specific constraints define the balance between centralized and distributed approaches.

3.1.5.2. Auto-provisioning and Service Discovery

NVEs must be able to identify the appropriate VNI for each Tenant System. This is based on state information that is often provided by external entities. For example, in an environment where a VM is a Tenant System, this information is provided by VM orchestration systems, since these are the only entities that have visibility of which VM belongs to which tenant.

A mechanism for communicating this information to the NVE is required. VAPs have to be created and mapped to the appropriate VNI. Depending upon the implementation, this control interface can be implemented using an auto-discovery protocol between Tenant Systems and their local NVE or through management entities. In either case, appropriate security and authentication mechanisms to verify that Tenant System information is not spoofed or altered are required. This is one critical aspect for providing integrity and tenant isolation in the system.

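A minimal sketch of such a control interface is shown below, assuming a hypothetical attach notification coming from a VM orchestration system and an abstract authentication check; none of the names are defined by NVO3.

      def on_tenant_attach(nve_vnis, notification, authenticate):
          # nve_vnis: VN identifier -> set of local VAPs on this NVE.
          # 'authenticate' stands for whatever mechanism verifies that the
          # notification really comes from the orchestration system.
          if not authenticate(notification):
              raise PermissionError("unauthenticated attach request")
          vn_id = notification["vn_id"]  # which tenant VN the VM belongs to
          vap = notification["vap"]      # e.g., vSwitch port or VLAN ID
          nve_vnis.setdefault(vn_id, set()).add(vap)  # create/map the VAP
          return nve_vnis[vn_id]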

NVEs may learn reachability information for VNIs on other NVEs via a control protocol that exchanges such information among NVEs or via a management-control entity.

3.1.5.3. Address Advertisement and Tunnel Mapping

As traffic reaches an ingress NVE on a VAP, a lookup is performed to determine which NVE or local VAP the packet needs to be sent to. If the packet is to be sent to another NVE, the packet is encapsulated with a tunnel header containing the destination information (destination IP address or MPLS label) of the egress NVE. Intermediate nodes (between the ingress and egress NVEs) switch or route traffic based upon the tunnel destination information.

A key step in the above process consists of identifying the destination NVE the packet is to be tunneled to. NVEs are responsible for maintaining a set of forwarding or mapping tables that hold the bindings between destination VM and egress NVE addresses. Several ways of populating these tables are possible: control plane driven, management plane driven, or data plane driven.

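For illustration, such a mapping table and the ingress lookup might look like the sketch below (hypothetical names); the same table could equally be filled by a management entity or, for an L2 service, by data-plane learning (see Section 4.2.1).

      # (VN Context, tenant address) -> egress NVE underlay address
      mappings = {}

      def on_advertisement(vn_context, tenant_addr, egress_nve_ip):
          # Binding received via the control plane or from an NVA.
          mappings[(vn_context, tenant_addr)] = egress_nve_ip

      def ingress_lookup(vn_context, dst_tenant_addr, local_vaps):
          # Returns either a local VAP or the egress NVE to tunnel to.
          if dst_tenant_addr in local_vaps:
              return ("local", local_vaps[dst_tenant_addr])
          return ("tunnel", mappings.get((vn_context, dst_tenant_addr)))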

When a control-plane protocol is used to distribute address reachability and tunneling information, the auto-provisioning and service discovery could be accomplished by the same protocol. In this scenario, the auto-provisioning and service discovery could be combined with (be inferred from) the address advertisement and associated tunnel mapping. Furthermore, a control-plane protocol that carries both MAC and IP addresses eliminates the need for the Address Resolution Protocol (ARP) and hence addresses one of the issues with explosive ARP handling as discussed in [RFC6820].

3.1.5.4. Overlay Tunneling

For overlay tunneling, and dependent upon the tunneling technology used for encapsulating the Tenant System packets, it may be sufficient to have one or more local NVE addresses assigned and used in the source and destination fields of a tunneling encapsulation header. Other information that is part of the tunneling encapsulation header may also need to be configured. In certain cases, local NVE configuration may be sufficient while in other cases, some tunneling-related information may need to be shared among NVEs. The information that needs to be shared will be technology dependent. For instance, potential information could include tunnel identity, encapsulation type, and/or tunnel resources. In certain cases, such as when using IP multicast in the underlay, tunnels that interconnect NVEs may need to be established. When tunneling information needs to be exchanged or shared among NVEs, a control-plane protocol may be required. For instance, it may be necessary to provide active/standby status information between NVEs, up/down status information, pruning/grafting information for multicast tunnels, etc.

In addition, a control plane may be required to set up the tunnel path for some tunneling technologies. This applies to both unicast and multicast tunneling.

3.2. Multihoming

Multihoming techniques can be used to increase the reliability of an NVO3 network. It is also important to ensure that the physical diversity in an NVO3 network is taken into account to avoid single points of failure.

Multihoming can be enabled in various nodes, from Tenant Systems into ToRs, ToRs into core switches/routers, and core nodes into DC GWs.

The NVO3 underlay nodes (i.e., from NVEs to DC GWs) rely on IP routing techniques or MPLS re-routing capabilities as the means to re-route traffic upon failures.

When a Tenant System is co-located with the NVE, the Tenant System is effectively single-homed to the NVE via a virtual port. When the Tenant System and the NVE are separated, the Tenant System is connected to the NVE via a logical L2 construct such as a VLAN, and it can be multihomed to various NVEs. An NVE may provide an L2 service to the end system or an L3 service. An NVE may be multihomed to a next layer in the DC at L2 or L3. When an NVE provides an L2 service and is not co-located with the end system, loop-avoidance techniques must be used. Similarly, when the NVE provides L3 service, similar dual-homing techniques can be used. When the NVE provides an L3 service to the end system, it is possible that no dynamic routing protocol is enabled between the end system and the NVE. The end system can be multihomed to multiple physically separated L3 NVEs over multiple interfaces. When one of the links connected to an NVE fails, the other interfaces can be used to reach the end system.

External connectivity from a DC can be handled by two or more DC gateways. Each gateway provides access to external networks such as VPNs or the Internet. A gateway may be connected to two or more edge nodes in the external network for redundancy. When a connection to an upstream node is lost, the alternative connection is used, and the failed route withdrawn.

3.3. VM Mobility

In DC environments utilizing VM technologies, an important feature is that VMs can move from one server to another server in the same or different L2 physical domains (within or across DCs) in a seamless manner.

A VM can be moved from one server to another in stopped or suspended state ("cold" VM mobility) or in running/active state ("hot" VM mobility). With "hot" mobility, VM L2 and L3 addresses need to be preserved. With "cold" mobility, it may be desired to preserve at least VM L3 addresses.

Solutions to maintain connectivity while a VM is moved are necessary in the case of "hot" mobility. This implies that connectivity among VMs is preserved. For instance, for L2 VNs, ARP caches are updated accordingly.

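As a sketch only (hypothetical names, reusing the kind of mapping table shown in Section 3.1.5.3), the control-plane reaction to a "hot" move could be to rebind the VM address to the new egress NVE and withdraw the old binding so that peers stop tunneling to the previous location:

      def on_vm_move(mappings, withdraw, vn_context, vm_addr,
                     old_nve, new_nve):
          # Bind the moved VM's address to its new location.
          mappings[(vn_context, vm_addr)] = new_nve
          # Tell peer NVEs (or the NVA) that the old binding is gone.
          withdraw(vn_context, vm_addr, old_nve)
          # For an L2 VN, peers would also refresh their ARP/ND caches,
          # e.g., triggered by a gratuitous ARP sent from the new location.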

Upon VM mobility, NVE policies that define connectivity among VMs must be maintained.

During VM mobility, it is expected that the path to the VM's default gateway assures adequate QoS to VM applications, i.e., QoS that matches the expected service-level agreement for these applications.

4. Key Aspects of Overlay Networks

The intent of this section is to highlight specific issues that proposed overlay solutions need to address.

4.1. Pros and Cons

An overlay network is a layer of virtual network topology on top of the physical network.

Overlay networks offer the following key advantages:

- Unicast tunneling state management and association of Tenant Systems reachability are handled at the edge of the network (at the NVE). Intermediate transport nodes are unaware of such state. Note that when multicast is enabled in the underlay network to build multicast trees for tenant VNs, there would be more state related to tenants in the underlay core network.

- Tunneling is used to aggregate traffic and hide tenant addresses from the underlay network and hence offers the advantage of minimizing the amount of forwarding state required within the underlay network.

- Decoupling of the overlay addresses (MAC and IP) used by VMs from the underlay network provides tenant separation and separation of the tenant address spaces from the underlay address space.

- Overlay networks support a large number of virtual network identifiers.

Overlay networks also create several challenges:

- Overlay networks typically have no control of underlay networks and lack underlay network information (e.g., underlay utilization):

o Overlay networks and/or their associated management entities typically probe the network to measure link or path properties, such as available bandwidth or packet loss rate. It is difficult to accurately evaluate network properties. It might be preferable for the underlay network to expose usage and performance information.

o Miscommunication or lack of coordination between overlay and underlay networks can lead to an inefficient usage of network resources.

o When multiple overlays co-exist on top of a common underlay network, the lack of coordination between overlays can lead to performance issues and/or resource usage inefficiencies.

- Traffic carried over an overlay might fail to traverse firewalls and NAT devices.

- Multicast service scalability: Multicast support may be required in the underlay network to address tenant flood containment or efficient multicast handling. The underlay may also be required to maintain multicast state on a per-tenant basis or even on a per-individual multicast flow of a given tenant. Ingress replication at the NVE eliminates that additional multicast state in the underlay core, but depending on the multicast traffic volume, it may cause inefficient use of bandwidth.

4.2. Overlay Issues to Consider
4.2.1. Data Plane vs. Control Plane Driven

In the case of an L2 NVE, it is possible to dynamically learn MAC addresses against VAPs. It is also possible that such addresses be known and controlled via management or a control protocol for both L2 NVEs and L3 NVEs. Dynamic data-plane learning implies that flooding of unknown destinations be supported and hence implies that broadcast and/or multicast be supported or that ingress replication be used as described in Section 4.2.3. Multicasting in the underlay network for dynamic learning may lead to significant scalability limitations. Specific forwarding rules must be enforced to prevent loops from happening. This can be achieved using a spanning tree, a shortest path tree, or a split-horizon mesh.

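The sketch below illustrates data-plane learning on an L2 NVE (hypothetical names): source MACs seen on a VAP are bound to that VAP, source MACs seen in decapsulated traffic are bound to the sending NVE, and unknown destinations are flooded as discussed in Section 4.2.3.

      # (VN Context, MAC) -> ("vap", id) or ("nve", underlay address)
      fdb = {}

      def learn_from_vap(vn_context, src_mac, vap_id):
          fdb[(vn_context, src_mac)] = ("vap", vap_id)

      def learn_from_tunnel(vn_context, src_mac, src_nve_ip):
          fdb[(vn_context, src_mac)] = ("nve", src_nve_ip)

      def forward(vn_context, dst_mac):
          # Unknown destination: flood (ingress replication or underlay
          # multicast), subject to the loop-avoidance rules above.
          return fdb.get((vn_context, dst_mac), ("flood", None))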

It should be noted that the amount of state to be distributed is dependent upon network topology and the number of virtual machines. Different forms of caching can also be utilized to minimize state distribution between the various elements. The control plane should not require an NVE to maintain the locations of all the Tenant Systems whose VNs are not present on the NVE. The use of a control plane does not imply that the data plane on NVEs has to maintain all the forwarding state in the control plane.

4.2.2. Coordination between Data Plane and Control Plane

For an L2 NVE, the NVE needs to be able to determine MAC addresses of the Tenant Systems connected via a VAP. This can be achieved via data-plane learning or a control plane. For an L3 NVE, the NVE needs to be able to determine the IP addresses of the Tenant Systems connected via a VAP.

In both cases, coordination with the NVE control protocol is needed such that when the NVE determines that the set of addresses behind a VAP has changed, it triggers the NVE control plane to distribute this information to its peers.
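
As an informal illustration of this coordination (the function and the control-plane interface names below are hypothetical, not defined by this framework), an NVE could compare the set of addresses currently observed behind a VAP with the set it has already advertised, and ask its control plane to advertise or withdraw the difference:

   # Hypothetical sketch: advertise/withdraw address changes behind a VAP.
   def sync_vap_addresses(vap, observed, advertised, control_plane):
       added = observed - advertised
       removed = advertised - observed
       if added:
           control_plane.advertise(vap, added)    # new MAC or IP addresses
       if removed:
           control_plane.withdraw(vap, removed)   # addresses no longer present
       return set(observed)                       # becomes the new advertised set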

4.2.3. Handling Broadcast, Unknown Unicast, and Multicast (BUM) Traffic

There are several options to support packet replication needed for broadcast, unknown unicast, and multicast. Typical methods include:

- Ingress replication

- Use of underlay multicast trees

There is a bandwidth vs. state trade-off between the two approaches. Depending upon the degree of replication required (i.e., the number of hosts per group) and the amount of multicast state to maintain, trading bandwidth for state should be considered.
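
The trade-off can be estimated with simple arithmetic. The sketch below is illustrative only; the figures in the example are hypothetical and not recommendations:

   # Hypothetical back-of-the-envelope comparison of the two options.
   def ingress_replication_bandwidth(bum_rate_bps, member_nves):
       # The ingress NVE sends one unicast copy per other NVE with members.
       return bum_rate_bps * max(member_nves - 1, 0)

   def underlay_multicast_state(tenants, groups_per_tenant):
       # One (S,G) or (*,G) entry per tenant multicast group in the core.
       return tenants * groups_per_tenant

   # Example: 10 Mb/s of BUM traffic replicated to 3 member NVEs costs about
   # 20 Mb/s of additional ingress bandwidth, whereas 1000 tenants with 10
   # groups each would require roughly 10,000 multicast states in the core.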

When the number of hosts per group is large, the use of underlay multicast trees may be more appropriate. When the number of hosts is small (e.g., 2-3) and/or the amount of multicast traffic is small, ingress replication may not be an issue.

Depending upon the size of the data center network and hence the number of (S,G) entries, and also the duration of multicast flows, the use of underlay multicast trees can be a challenge.

When flows are well known, it is possible to pre-provision such multicast trees. However, it is often difficult to predict application flows ahead of time; hence, programming of (S,G) entries for short-lived flows could be impractical.

A possible trade-off is to use shared multicast trees in the underlay as opposed to dedicated multicast trees.

4.2.4. Path MTU

When using overlay tunneling, an outer header is added to the original frame. This can cause the MTU of the path to the egress tunnel endpoint to be exceeded.

It is usually not desirable to rely on IP fragmentation for performance reasons. Ideally, the interface MTU as seen by a Tenant System is adjusted such that no fragmentation is needed.

It is possible for the MTU to be configured manually or to be discovered dynamically. Various Path MTU discovery techniques exist in order to determine the proper MTU size to use:

- Classical ICMP-based Path MTU Discovery [RFC1191] [RFC1981]

Tenant Systems rely on ICMP messages to discover the MTU of the end-to-end path to its destination. This method is not always possible, such as when traversing middleboxes (e.g., firewalls) that disable ICMP for security reasons.

- Extended Path MTU Discovery techniques such as those defined in [RFC4821]

Tenant Systems send probe packets of different sizes and rely on confirmation of receipt or lack thereof from receivers to allow a sender to discover the MTU of the end-to-end paths.
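
A minimal sketch of the probing idea follows. It is a simplified binary search assuming a probe() callback that reports whether a given probe size was confirmed end to end; an actual implementation per [RFC4821] must also cope with ordinary packet loss:

   # Hypothetical sketch of packetization-layer Path MTU search.
   def discover_path_mtu(probe, lower=1280, upper=9000):
       # 'lower' is a size known to work; 'upper' is an optimistic bound.
       while lower < upper:
           size = (lower + upper + 1) // 2
           if probe(size):
               lower = size        # confirmed: path MTU is at least 'size'
           else:
               upper = size - 1    # not confirmed: assume path MTU is smaller
       return lower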

While it could also be possible to rely on the NVE to perform segmentation and reassembly operations, so that Tenant Systems would not need to know the end-to-end MTU, this would lead to undesired performance and congestion issues and would significantly increase the complexity of hardware NVEs, which would need to implement buffering and reassembly logic.

Preferably, the underlay network should be designed in such a way that the MTU can accommodate the extra tunneling and possibly additional NVO3 header encapsulation overhead.
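
As a hedged illustration of such an MTU budget (the 50-byte overhead below assumes a VXLAN-like encapsulation over IPv4; an actual NVO3 encapsulation may add a different amount):

   # Hypothetical MTU budget: tenant-facing IP MTU = underlay IP MTU minus
   # encapsulation overhead (outer IPv4 20 + UDP 8 + VXLAN 8 + inner
   # Ethernet 14 = 50 bytes in this example).
   UNDERLAY_IP_MTU = 1600                 # example value for the DC underlay
   ENCAP_OVERHEAD = 20 + 8 + 8 + 14

   tenant_ip_mtu = UNDERLAY_IP_MTU - ENCAP_OVERHEAD   # 1550 bytes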

4.2.5. NVE Location Trade-Offs

In the case of DC traffic, traffic originating from a VM is native Ethernet traffic. This traffic can be switched by a local virtual switch or ToR switch and then by a DC gateway. The NVE function can be embedded within any of these elements.

There are several criteria to consider when deciding where the NVE function should happen:

- Processing and memory requirements

o Datapath (e.g., lookups, filtering, and encapsulation/decapsulation)

o Control-plane processing (e.g., routing, signaling, and OAM) and where specific control-plane functions should be enabled

- FIB/RIB size

- Multicast support

o Routing/signaling protocols

o Packet replication capability

o Multicast FIB

- Fragmentation support

- QoS support (e.g., marking, policing, and queuing)

- Resiliency

4.2.6. Interaction between Network Overlays and Underlays

When multiple overlays co-exist on top of a common underlay network, resources (e.g., bandwidth) should be provisioned to ensure that traffic from overlays can be accommodated and QoS objectives can be met. Overlays can have partially overlapping paths (nodes and links).

Each overlay is selfish by nature. It sends traffic so as to optimize its own performance without considering the impact on other overlays, unless the underlay paths are traffic engineered on a per-overlay basis to avoid congestion of underlay resources.

Better visibility between overlays and underlays, or general coordination in placing overlay demands on an underlay network, may be achieved by providing mechanisms to exchange performance and liveliness information between the underlay and overlay(s) or by the use of such information by a coordination system. Such information may include:

- Performance metrics (throughput, delay, loss, jitter) such as defined in [RFC3148], [RFC2679], [RFC2680], and [RFC3393].

- Cost metrics
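
As an informal sketch of how a coordination system might use metrics such as those listed above (the data model and function below are hypothetical), an overlay demand could be placed on the candidate underlay path with the most remaining headroom:

   # Hypothetical placement of an overlay demand using underlay metrics.
   def place_demand(paths, demand_bps):
       # 'paths' maps a path identifier to (capacity_bps, current_load_bps).
       headroom = {p: cap - load for p, (cap, load) in paths.items()
                   if cap - load >= demand_bps}
       if not headroom:
           return None                         # no path can absorb the demand
       return max(headroom, key=headroom.get)  # path with most spare capacity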

5. Security Considerations

There are three points of view when considering security for NVO3. First, the service offered by a service provider via NVO3 technology to a tenant must meet the mutually agreed security requirements. Second, a network implementing NVO3 must be able to trust the virtual network identity associated with packets received from a tenant. Third, an NVO3 network must consider the security associated with running as an overlay across the underlay network.

To meet a tenant's security requirements, the NVO3 service must deliver packets from the tenant to the indicated destination(s) in the overlay network and external networks. The NVO3 service provides data confidentiality through data separation. The use of both VNIs and tunneling of tenant traffic by NVEs ensures that NVO3 data is kept in a separate context and thus separated from other tenant traffic. The infrastructure supporting an NVO3 service (e.g., management systems, NVEs, NVAs, and intermediate underlay networks) should be limited to authorized access so that data integrity can be expected. If a tenant requires that its data be confidential, then the Tenant System may choose to encrypt its data before transmission into the NVO3 service.

An NVO3 service must be able to verify the VNI received on a packet from the tenant. To ensure this, not only tenant data but also NVO3 control data must be secured (e.g., control traffic between NVAs and NVEs, between NVAs, and between NVEs). Since NVEs and NVAs play a central role in NVO3, it is critical that secure access to NVEs and NVAs be ensured such that no unauthorized access is possible. As discussed in Section 3.1.5.2, identification of Tenant Systems is based upon state that is often provided by management systems (e.g., a VM orchestration system in a virtualized environment). Secure access to such management systems must also be ensured. When an NVE receives data from a Tenant System, the tenant identity needs to be verified in order to guarantee that it is authorized to access the corresponding VN. This can be achieved by identifying incoming packets against specific VAPs in some cases. In other circumstances, authentication may be necessary. Once this verification is done, the packet is allowed into the NVO3 overlay, and no integrity protection is provided on the overlay packet encapsulation (e.g., the VNI, destination NVE, etc.).

Since an NVO3 service can run across diverse underlay networks, when the underlay network is not trusted to provide at least data integrity, data encryption is needed to assure correct packet delivery.

It is also desirable to restrict the types of information (e.g., topology information as discussed in Section 4.2.6) that can be exchanged between an NVO3 service and underlay networks based upon their agreed security requirements.

6. Informative References

[EVPN] Sajassi, A., Aggarwal, R., Bitar, N., Isaac, A., and J. Uttaro, "BGP MPLS Based Ethernet VPN", Work in Progress, draft-ietf-l2vpn-evpn-10, October 2014.

[OPPSEC] Dukhovni, V. "Opportunistic Security: Some Protection Most of the Time", Work in Progress, draft-dukhovni-opportunistic-security-04, August 2014.

[RFC1191] Mogul, J. and S. Deering, "Path MTU discovery", RFC 1191, November 1990, <http://www.rfc-editor.org/info/rfc1191>.

[RFC1981] McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery for IP version 6", RFC 1981, August 1996, <http://www.rfc-editor.org/info/rfc1981>.

[RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Delay Metric for IPPM", RFC 2679, September 1999, <http://www.rfc-editor.org/info/rfc2679>.

[RFC2680] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Packet Loss Metric for IPPM", RFC 2680, September 1999, <http://www.rfc-editor.org/info/rfc2680>.

[RFC3148] Mathis, M. and M. Allman, "A Framework for Defining Empirical Bulk Transfer Capacity Metrics", RFC 3148, July 2001, <http://www.rfc-editor.org/info/rfc3148>.

[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)", RFC 3393, November 2002, <http://www.rfc-editor.org/info/rfc3393>.

[RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks (VPNs)", RFC 4364, February 2006, <http://www.rfc-editor.org/info/rfc4364>.

[RFC4761] Kompella, K., Ed., and Y. Rekhter, Ed., "Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling", RFC 4761, January 2007, <http://www.rfc-editor.org/info/rfc4761>.

[RFC4762] Lasserre, M., Ed., and V. Kompella, Ed., "Virtual Private LAN Service (VPLS) Using Label Distribution Protocol (LDP) Signaling", RFC 4762, January 2007, <http://www.rfc-editor.org/info/rfc4762>.

[RFC4821] Mathis, M. and J. Heffner, "Packetization Layer Path MTU Discovery", RFC 4821, March 2007, <http://www.rfc-editor.org/info/rfc4821>.

[RFC6820] Narten, T., Karir, M., and I. Foo, "Address Resolution Problems in Large Data Center Networks", RFC 6820, January 2013, <http://www.rfc-editor.org/info/rfc6820>.

[RFC7364] Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L., Kreeger, L., and M. Napierala, "Problem Statement: Overlays for Network Virtualization", RFC 7364, October 2014, <http://www.rfc-editor.org/info/rfc7364>.

Acknowledgments

In addition to the authors, the following people contributed to this document: Dimitrios Stiliadis, Rotem Salomonovitch, Lucy Yong, Thomas Narten, Larry Kreeger, and David Black.

Authors' Addresses

Marc Lasserre
Alcatel-Lucent
EMail: marc.lasserre@alcatel-lucent.com

Florin Balus
Alcatel-Lucent
777 E. Middlefield Road
Mountain View, CA 94043
United States
EMail: florin.balus@alcatel-lucent.com

Thomas Morin
Orange
EMail: thomas.morin@orange.com

Nabil Bitar
Verizon
50 Sylvan Road
Waltham, MA 02145
United States
EMail: nabil.n.bitar@verizon.com

Yakov Rekhter
Juniper
EMail: yakov@juniper.net
