Internet Engineering Task Force (IETF)                          D. Black
Request for Comments: 8014                                      Dell EMC
Category: Informational                                        J. Hudson
ISSN: 2070-1721                                               L. Kreeger
                                                             M. Lasserre
                                                             Independent
                                                               T. Narten
                                                                     IBM
                                                           December 2016
        
An Architecture for Data-Center Network Virtualization over Layer 3 (NVO3)

Abstract

This document presents a high-level overview architecture for building data-center Network Virtualization over Layer 3 (NVO3) networks. The architecture is given at a high level, showing the major components of an overall system. An important goal is to divide the space into individual smaller components that can be implemented independently with clear inter-component interfaces and interactions. It should be possible to build and implement individual components in isolation and have them interoperate with other independently implemented components. That way, implementers have flexibility in implementing individual components and can optimize and innovate within their respective components without requiring changes to other components.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc8014.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   4
   2.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   4
   3.  Background  . . . . . . . . . . . . . . . . . . . . . . . . .   5
     3.1.  VN Service (L2 and L3)  . . . . . . . . . . . . . . . . .   7
       3.1.1.  VLAN Tags in L2 Service . . . . . . . . . . . . . . .   8
       3.1.2.  Packet Lifetime Considerations  . . . . . . . . . . .   8
     3.2.  Network Virtualization Edge (NVE) Background  . . . . . .   9
     3.3.  Network Virtualization Authority (NVA) Background . . . .  10
     3.4.  VM Orchestration Systems  . . . . . . . . . . . . . . . .  11
   4.  Network Virtualization Edge (NVE) . . . . . . . . . . . . . .  12
     4.1.  NVE Co-located with Server Hypervisor . . . . . . . . . .  12
     4.2.  Split-NVE . . . . . . . . . . . . . . . . . . . . . . . .  13
       4.2.1.  Tenant VLAN Handling in Split-NVE Case  . . . . . . .  14
     4.3.  NVE State . . . . . . . . . . . . . . . . . . . . . . . .  14
     4.4.  Multihoming of NVEs . . . . . . . . . . . . . . . . . . .  15
     4.5.  Virtual Access Point (VAP)  . . . . . . . . . . . . . . .  16
   5.  Tenant System Types . . . . . . . . . . . . . . . . . . . . .  16
     5.1.  Overlay-Aware Network Service Appliances  . . . . . . . .  16
     5.2.  Bare Metal Servers  . . . . . . . . . . . . . . . . . . .  17
     5.3.  Gateways  . . . . . . . . . . . . . . . . . . . . . . . .  17
       5.3.1.  Gateway Taxonomy  . . . . . . . . . . . . . . . . . .  18
         5.3.1.1.  L2 Gateways (Bridging)  . . . . . . . . . . . . .  18
         5.3.1.2.  L3 Gateways (Only IP Packets) . . . . . . . . . .  18
     5.4.  Distributed Inter-VN Gateways . . . . . . . . . . . . . .  19
     5.5.  ARP and Neighbor Discovery  . . . . . . . . . . . . . . .  20
   6.  NVE-NVE Interaction . . . . . . . . . . . . . . . . . . . . .  20
   7.  Network Virtualization Authority (NVA)  . . . . . . . . . . .  21
     7.1.  How an NVA Obtains Information  . . . . . . . . . . . . .  21
     7.2.  Internal NVA Architecture . . . . . . . . . . . . . . . .  22
     7.3.  NVA External Interface  . . . . . . . . . . . . . . . . .  22
   8.  NVE-NVA Protocol  . . . . . . . . . . . . . . . . . . . . . .  24
     8.1.  NVE-NVA Interaction Models  . . . . . . . . . . . . . . .  24
     8.2.  Direct NVE-NVA Protocol . . . . . . . . . . . . . . . . .  25
     8.3.  Propagating Information Between NVEs and NVAs . . . . . .  25
   9.  Federated NVAs  . . . . . . . . . . . . . . . . . . . . . . .  26
     9.1.  Inter-NVA Peering . . . . . . . . . . . . . . . . . . . .  29
   10. Control Protocol Work Areas . . . . . . . . . . . . . . . . .  29
   11. NVO3 Data-Plane Encapsulation . . . . . . . . . . . . . . . .  29
   12. Operations, Administration, and Maintenance (OAM) . . . . . .  30
   13. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . .  31
   14. Security Considerations . . . . . . . . . . . . . . . . . . .  31
   15. Informative References  . . . . . . . . . . . . . . . . . . .  32
   Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . .  34
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  35
        
1. Introduction

This document presents a high-level architecture for building data-center Network Virtualization over Layer 3 (NVO3) networks. The architecture is given at a high level, which shows the major components of an overall system. An important goal is to divide the space into smaller individual components that can be implemented independently with clear inter-component interfaces and interactions. It should be possible to build and implement individual components in isolation and have them interoperate with other independently implemented components. That way, implementers have flexibility in implementing individual components and can optimize and innovate within their respective components without requiring changes to other components.

The motivation for overlay networks is given in "Problem Statement: Overlays for Network Virtualization" [RFC7364]. "Framework for Data Center (DC) Network Virtualization" [RFC7365] provides a framework for discussing overlay networks generally and the various components that must work together in building such systems. This document differs from the framework document in that it doesn't attempt to cover all possible approaches within the general design space. Rather, it describes one particular approach that the NVO3 WG has focused on.

2. Terminology

This document uses the same terminology as [RFC7365]. In addition, the following terms are used:

NV Domain: A Network Virtualization Domain is an administrative construct that defines a Network Virtualization Authority (NVA), the set of Network Virtualization Edges (NVEs) associated with that NVA, and the set of virtual networks the NVA manages and supports. NVEs are associated with a (logically centralized) NVA, and an NVE supports communication for any of the virtual networks in the domain.

NV Region: A region over which information about a set of virtual networks is shared. The degenerate case of a single NV Domain corresponds to an NV Region corresponding to that domain. The more interesting case occurs when two or more NV Domains share information about part or all of a set of virtual networks that they manage. Two NVAs share information about particular virtual networks for the purpose of supporting connectivity between tenants located in different NV Domains. NVAs can share information about an entire NV Domain, or just individual virtual networks.

Tenant System Interface (TSI): The interface to a Virtual Network (VN) as presented to a Tenant System (TS, see [RFC7365]). The TSI logically connects to the NVE via a Virtual Access Point (VAP). To the Tenant System, the TSI is like a Network Interface Card (NIC); the TSI presents itself to a Tenant System as a normal network interface.

VLAN: Unless stated otherwise, the terms "VLAN" and "VLAN Tag" are used in this document to denote a Customer VLAN (C-VLAN) [IEEE.802.1Q]; the terms are used interchangeably to improve readability.

3. Background

Overlay networks are an approach for providing network virtualization services to a set of Tenant Systems (TSs) [RFC7365]. With overlays, data traffic between tenants is tunneled across the underlying data center's IP network. The use of tunnels provides a number of benefits by decoupling the network as viewed by tenants from the underlying physical network across which they communicate. Additional discussion of some NVO3 use cases can be found in [USECASES].

Tenant Systems connect to Virtual Networks (VNs), with each VN having associated attributes defining properties of the network (such as the set of members that connect to it). Tenant Systems connected to a virtual network typically communicate freely with other Tenant Systems on the same VN, but communication between Tenant Systems on one VN and those external to the VN (whether on another VN or connected to the Internet) is carefully controlled and governed by policy. The NVO3 architecture does not impose any restrictions on the application of policy controls even within a VN.

A Network Virtualization Edge (NVE) [RFC7365] is the entity that implements the overlay functionality. An NVE resides at the boundary between a Tenant System and the overlay network as shown in Figure 1. An NVE creates and maintains local state about each VN for which it is providing service on behalf of a Tenant System.

       +--------+                                             +--------+
       | Tenant +--+                                     +----| Tenant |
       | System |  |                                    (')   | System |
       +--------+  |          ................         (   )  +--------+
                   |  +-+--+  .              .  +--+-+  (_)
                   |  | NVE|--.              .--| NVE|   |
                   +--|    |  .              .  |    |---+
                      +-+--+  .              .   +--+-+
                      /       .              .
                     /        .  L3 Overlay  .   +--+-++--------+
       +--------+   /         .    Network   .   | NVE|| Tenant |
       | Tenant +--+          .              .- -|    || System |
       | System |             .              .   +--+-++--------+
       +--------+             ................
                                     |
                                   +----+
                                   | NVE|
                                   |    |
                                   +----+
                                     |
                                     |
                           =====================
                             |               |
                         +--------+      +--------+
                         | Tenant |      | Tenant |
                         | System |      | System |
                         +--------+      +--------+
        
Figure 1: NVO3 Generic Reference Model

The following subsections describe key aspects of an overlay system in more detail. Section 3.1 describes the service model (Ethernet vs. IP) provided to Tenant Systems. Section 3.2 describes NVEs in more detail. Section 3.3 introduces the Network Virtualization Authority, from which NVEs obtain information about virtual networks. Section 3.4 provides background on Virtual Machine (VM) orchestration systems and their use of virtual networks.

3.1. VN Service (L2 and L3)

A VN provides either Layer 2 (L2) or Layer 3 (L3) service to connected tenants. For L2 service, VNs transport Ethernet frames, and a Tenant System is provided with a service that is analogous to being connected to a specific L2 C-VLAN. L2 broadcast frames are generally delivered to all (and multicast frames delivered to a subset of) the other Tenant Systems on the VN. To a Tenant System, it appears as if they are connected to a regular L2 Ethernet link. Within the NVO3 architecture, tenant frames are tunneled to remote NVEs based on the Media Access Control (MAC) addresses of the frame headers as originated by the Tenant System. On the underlay, NVO3 packets are forwarded between NVEs based on the outer addresses of tunneled packets.

For L3 service, VNs are routed networks that transport IP datagrams, and a Tenant System is provided with a service that supports only IP traffic. Within the NVO3 architecture, tenant frames are tunneled to remote NVEs based on the IP addresses of the packet originated by the Tenant System; any L2 destination addresses provided by Tenant Systems are effectively ignored by the NVEs and overlay network. For L3 service, the Tenant System will be configured with an IP subnet that is effectively a point-to-point link, i.e., having only the Tenant System and a next-hop router address on it.

L2 service is intended for systems that need native L2 Ethernet service and the ability to run protocols directly over Ethernet (i.e., not based on IP). L3 service is intended for systems in which all the traffic can safely be assumed to be IP. It is important to note that whether or not an NVO3 network provides L2 or L3 service to a Tenant System, the Tenant System does not generally need to be aware of the distinction. In both cases, the virtual network presents itself to the Tenant System as an L2 Ethernet interface. An Ethernet interface is used in both cases simply as a widely supported interface type that essentially all Tenant Systems already support. Consequently, no special software is needed on Tenant Systems to use an L3 vs. an L2 overlay service.

NVO3 can also provide a combined L2 and L3 service to tenants. A combined service provides L2 service for intra-VN communication but also provides L3 service for L3 traffic entering or leaving the VN. Architecturally, the handling of a combined L2/L3 service within the NVO3 architecture is intended to match what is commonly done today in non-overlay environments by devices providing a combined bridge/router service. With combined service, the virtual network itself retains the semantics of L2 service, and all traffic is processed according to its L2 semantics. In addition, however, traffic requiring IP processing is also processed at the IP level.
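
As an informal, non-normative illustration of the combined bridge/router behavior described above, the following Python sketch applies L3 processing only to traffic addressed at L2 to the routing function and bridges everything else; the MAC address and function names are assumptions made for this example, not NVO3 mechanisms.

   # Rough sketch of combined L2/L3 handling (names are illustrative).
   ROUTER_MAC = "02:00:00:00:00:01"      # assumed MAC of the routing function

   def combined_service(frame):
       """Return which treatment a frame gets on a combined L2/L3 VN."""
       if frame["dst_mac"] == ROUTER_MAC:
           return "l3-route"             # IP processing (local NVE or router)
       if int(frame["dst_mac"].split(":")[0], 16) & 1:
           return "l2-flood"             # group address: broadcast/multicast
       return "l2-bridge"                # ordinary intra-VN L2 forwarding

   assert combined_service({"dst_mac": ROUTER_MAC}) == "l3-route"
   assert combined_service({"dst_mac": "ff:ff:ff:ff:ff:ff"}) == "l2-flood"
   assert combined_service({"dst_mac": "00:11:22:33:44:55"}) == "l2-bridge"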

The IP processing for a combined service can be implemented on a standalone device attached to the virtual network (e.g., an IP router) or implemented locally on the NVE (see Section 5.4 on Distributed Inter-VN Gateways). For unicast traffic, NVE implementation of a combined service may result in a packet being delivered to another Tenant System attached to the same NVE (on either the same or a different VN), tunneled to a remote NVE, or even forwarded outside the NV Domain. For multicast or broadcast packets, the combination of NVE L2 and L3 processing may result in copies of the packet receiving both L2 and L3 treatments to realize delivery to all of the destinations involved. This distributed NVE implementation of IP routing results in the same network delivery behavior as if the L2 processing of the packet included delivery of the packet to an IP router attached to the L2 VN as a Tenant System, with the router having additional network attachments to other networks, either virtual or not.

3.1.1. VLAN Tags in L2 Service

An NVO3 L2 virtual network service may include encapsulated L2 VLAN tags provided by a Tenant System but does not use encapsulated tags in deciding where and how to forward traffic. Such VLAN tags can be passed through so that Tenant Systems that send or expect to receive them can be supported as appropriate.

The processing of VLAN tags that an NVE receives from a TS is controlled by settings associated with the VAP. Just as in the case with ports on Ethernet switches, a number of settings are possible. For example, Customer VLAN Tags (C-TAGs) can be passed through transparently, could always be stripped upon receipt from a Tenant System, could be compared against a list of explicitly configured tags, etc.
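
The per-VAP settings described above might be modeled as in the following Python sketch, which shows three possible C-TAG handling policies (pass-through, strip, and allow-list); the policy names and the function itself are hypothetical and used only for illustration.

   # Illustrative sketch of per-VAP C-TAG handling (hypothetical names).
   PASS_THROUGH, STRIP, ALLOW_LIST = "pass-through", "strip", "allow-list"

   def process_ctag(vap_policy, allowed_tags, ctag):
       """Return the C-TAG to carry across the VN, or raise if rejected.

       vap_policy   -- one of PASS_THROUGH, STRIP, ALLOW_LIST
       allowed_tags -- set of explicitly configured C-TAGs (ALLOW_LIST only)
       ctag         -- C-TAG received from the Tenant System, or None
       """
       if vap_policy == PASS_THROUGH:
           return ctag                  # carried transparently end to end
       if vap_policy == STRIP:
           return None                  # always removed on receipt
       if vap_policy == ALLOW_LIST:
           if ctag in allowed_tags:
               return ctag              # explicitly configured tag is kept
           raise ValueError("C-TAG %r not permitted on this VAP" % ctag)
       raise ValueError("unknown VAP policy")

   # Example: a VAP configured to admit only C-TAGs 10 and 20.
   assert process_ctag(ALLOW_LIST, {10, 20}, 10) == 10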

Note that there are additional considerations when VLAN tags are used to identify both the VN and a Tenant System VLAN within that VN, as described in Section 4.2.1.

3.1.2. Packet Lifetime Considerations

For L3 service, Tenant Systems should expect the IPv4 Time to Live (TTL) or IPv6 Hop Limit in the packets they send to be decremented by at least 1. For L2 service, neither the TTL nor the Hop Limit (when the packet is IP) is modified. The underlay network manages TTLs and Hop Limits in the outer IP encapsulation -- the values in these fields could be independent from or related to the values in the same fields of tenant IP packets.
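
The rule above can be illustrated with a small, non-normative sketch of inner/outer lifetime handling; the field names and the outer TTL value (64) are assumptions, since the underlay chooses its own outer values.

   # Schematic of packet-lifetime handling at an ingress NVE (assumed names).
   def forward_tenant_packet(pkt, service):
       """pkt is a dict with an inner 'ttl' (IPv4 TTL or IPv6 Hop Limit)."""
       if service == "L3":
           pkt["ttl"] -= 1              # L3 service: decrement by at least 1
           if pkt["ttl"] <= 0:
               return None              # expired; would be dropped
       # For L2 service the inner TTL/Hop Limit is left untouched.
       outer = {"outer_ttl": 64}        # underlay-managed, independent value
       return {"outer": outer, "inner": pkt}

   encapsulated = forward_tenant_packet({"ttl": 17}, "L3")
   assert encapsulated["inner"]["ttl"] == 16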

3.2. Network Virtualization Edge (NVE) Background

Tenant Systems connect to NVEs via a Tenant System Interface (TSI). The TSI logically connects to the NVE via a Virtual Access Point (VAP), and each VAP is associated with one VN as shown in Figure 2. To the Tenant System, the TSI is like a NIC; the TSI presents itself to a Tenant System as a normal network interface. On the NVE side, a VAP is a logical network port (virtual or physical) into a specific virtual network. Note that two different Tenant Systems (and TSIs) attached to a common NVE can share a VAP (e.g., TS1 and TS2 in Figure 2) so long as they connect to the same VN.

                    |         Data-Center Network (IP)        |
                    |                                         |
                    +-----------------------------------------+
                         |                           |
                         |       Tunnel Overlay      |
            +------------+---------+       +---------+------------+
            | +----------+-------+ |       | +-------+----------+ |
            | |  Overlay Module  | |       | |  Overlay Module  | |
            | +---------+--------+ |       | +---------+--------+ |
            |           |          |       |           |          |
     NVE1   |           |          |       |           |          | NVE2
            |  +--------+-------+  |       |  +--------+-------+  |
            |  | VNI1      VNI2 |  |       |  | VNI1      VNI2 |  |
            |  +-+----------+---+  |       |  +-+-----------+--+  |
            |    | VAP1     | VAP2 |       |    | VAP1      | VAP2|
            +----+----------+------+       +----+-----------+-----+
                 |          |                   |           |
                 |\         |                   |           |
                 | \        |                   |          /|
          -------+--\-------+-------------------+---------/-+-------
                 |   \      |     Tenant        |        /  |
            TSI1 |TSI2\     | TSI3            TSI1  TSI2/   TSI3
                +---+ +---+ +---+             +---+ +---+   +---+
                |TS1| |TS2| |TS3|             |TS4| |TS5|   |TS6|
                +---+ +---+ +---+             +---+ +---+   +---+
        
Figure 2: NVE Reference Model

The Overlay Module performs the actual encapsulation and decapsulation of tunneled packets. The NVE maintains state about the virtual networks it is a part of so that it can provide the Overlay Module with information such as the destination address of the NVE to tunnel a packet to and the Context ID that should be placed in the encapsulation header to identify the virtual network that a tunneled packet belongs to.

On the side facing the data-center network, the NVE sends and receives native IP traffic. When ingressing traffic from a Tenant System, the NVE identifies the egress NVE to which the packet should be sent, adds an overlay encapsulation header, and sends the packet on the underlay network. When receiving traffic from a remote NVE, an NVE strips off the encapsulation header and delivers the (original) packet to the appropriate Tenant System. When the source and destination Tenant System are on the same NVE, no encapsulation is needed and the NVE forwards traffic directly.
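
A minimal sketch of the ingress-side decision just described follows. It assumes a pre-populated per-VN mapping from inner addresses to egress NVE underlay addresses (how such a table is built is the subject of Sections 3.3 and 7); all function and field names are illustrative.

   # Illustrative NVE ingress processing (hypothetical data structures).
   LOCAL = "local"                       # marker for TSes on this NVE

   def ingress(nve_mappings, vn_context_id, inner_dst, payload):
       """Tunnel, deliver locally, or fail, based on the per-VN mapping."""
       egress = nve_mappings[vn_context_id].get(inner_dst)
       if egress is None:
           return ("miss", inner_dst)    # e.g., query the NVA (Section 3.3)
       if egress == LOCAL:
           return ("deliver", payload)   # same NVE: no encapsulation needed
       header = {"outer_dst": egress,    # underlay address of remote NVE
                 "context_id": vn_context_id}
       return ("tunnel", header, payload)

   mappings = {5001: {"10.1.1.2": "192.0.2.20", "10.1.1.3": LOCAL}}
   assert ingress(mappings, 5001, "10.1.1.2", b"...")[0] == "tunnel"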

Conceptually, the NVE is a single entity implementing the NVO3 functionality. In practice, there are a number of different implementation scenarios, as described in detail in Section 4.

3.3. Network Virtualization Authority (NVA) Background

Address dissemination refers to the process of learning, building, and distributing the mapping/forwarding information that NVEs need in order to tunnel traffic to each other on behalf of communicating Tenant Systems. For example, in order to send traffic to a remote Tenant System, the sending NVE must know the destination NVE for that Tenant System.

One way to build and maintain mapping tables is to use learning, as 802.1 bridges do [IEEE.802.1Q]. When forwarding traffic to multicast or unknown unicast destinations, an NVE could simply flood traffic. While flooding works, it can lead to traffic hot spots and to problems in larger networks (e.g., excessive amounts of flooded traffic).

Alternatively, to reduce the scope of where flooding must take place, or to eliminate it all together, NVEs can make use of a Network Virtualization Authority (NVA). An NVA is the entity that provides address mapping and other information to NVEs. NVEs interact with an NVA to obtain any required address-mapping information they need in order to properly forward traffic on behalf of tenants. The term "NVA" refers to the overall system, without regard to its scope or how it is implemented. NVAs provide a service, and NVEs access that service via an NVE-NVA protocol as discussed in Section 8.
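
At a very high level, NVA-assisted resolution might look like the following sketch, in which the NVA interface is reduced to a single lookup call purely as an assumption for illustration; the actual NVE-NVA interaction models are discussed in Section 8.

   # Hypothetical sketch of NVA-assisted address resolution with a cache.
   class Nva:
       def __init__(self, table):
           self.table = table            # {(vn, inner_addr): outer_addr}
       def lookup(self, vn, inner_addr):
           return self.table.get((vn, inner_addr))   # None if unknown

   class NveResolver:
       def __init__(self, nva):
           self.nva, self.cache = nva, {}
       def resolve(self, vn, inner_addr):
           key = (vn, inner_addr)
           if key not in self.cache:                  # miss: ask the NVA
               self.cache[key] = self.nva.lookup(vn, inner_addr)
           return self.cache[key]        # outer NVE address, or None

   nva = Nva({(5001, "10.1.1.2"): "192.0.2.20"})
   assert NveResolver(nva).resolve(5001, "10.1.1.2") == "192.0.2.20"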

Even when an NVA is present, Ethernet bridge MAC address learning could be used as a fallback mechanism, should the NVA be unable to provide an answer or for other reasons. This document does not consider flooding approaches in detail, as there are a number of benefits in using an approach that depends on the presence of an NVA.

For the rest of this document, it is assumed that an NVA exists and will be used. NVAs are discussed in more detail in Section 7.

3.4. VM Orchestration Systems

VM orchestration systems manage server virtualization across a set of servers. Although VM management is a separate topic from network virtualization, the two areas are closely related. Managing the creation, placement, and movement of VMs also involves creating, attaching to, and detaching from virtual networks. A number of existing VM orchestration systems have incorporated aspects of virtual network management into their systems.

Note also that although this section uses the terms "VM" and "hypervisor" throughout, the same issues apply to other virtualization approaches, including Linux Containers (LXC), BSD Jails, Network Service Appliances as discussed in Section 5.1, etc. From an NVO3 perspective, it should be assumed that where the document uses the term "VM" and "hypervisor", the intention is that the discussion also applies to other systems, where, e.g., the host operating system plays the role of the hypervisor in supporting virtualization, and a container plays the equivalent role as a VM.

When a new VM image is started, the VM orchestration system determines where the VM should be placed, interacts with the hypervisor on the target server to load and start the VM, and controls when a VM should be shut down or migrated elsewhere. VM orchestration systems also have knowledge about how a VM should connect to a network, possibly including the name of the virtual network to which a VM is to connect. The VM orchestration system can pass such information to the hypervisor when a VM is instantiated. VM orchestration systems have significant (and sometimes global) knowledge over the domain they manage. They typically know on what servers a VM is running, and metadata associated with VM images can be useful from a network virtualization perspective. For example, the metadata may include the addresses (MAC and IP) the VMs will use and the name(s) of the virtual network(s) they connect to.

VM orchestration systems run a protocol with an agent running on the hypervisor of the servers they manage. That protocol can also carry information about what virtual network a VM is associated with. When the orchestrator instantiates a VM on a hypervisor, the hypervisor interacts with the NVE in order to attach the VM to the virtual networks it has access to. In general, the hypervisor will need to communicate significant VM state changes to the NVE. In the reverse direction, the NVE may need to communicate network connectivity information back to the hypervisor. Examples of deployed VM orchestration systems include VMware's vCenter Server, Microsoft's System Center Virtual Machine Manager, and systems based on OpenStack and its associated plugins (e.g., Nova and Neutron). Each can pass information about what virtual networks a VM connects to down to the hypervisor. The protocol used between the VM orchestration system and hypervisors is generally proprietary.

It should be noted that VM orchestration systems may not have direct access to all networking-related information a VM uses. For example, a VM may make use of additional IP or MAC addresses that the VM management system is not aware of.

4. Network Virtualization Edge (NVE)

As introduced in Section 3.2, an NVE is the entity that implements the overlay functionality. This section describes NVEs in more detail. An NVE will have two external interfaces:

Facing the Tenant System: On the side facing the Tenant System, an NVE interacts with the hypervisor (or equivalent entity) to provide the NVO3 service. An NVE will need to be notified when a Tenant System "attaches" to a virtual network (so it can validate the request and set up any state needed to send and receive traffic on behalf of the Tenant System on that VN). Likewise, an NVE will need to be informed when the Tenant System "detaches" from the virtual network so that it can reclaim state and resources appropriately.

Facing the Data-Center Network: On the side facing the data-center network, an NVE interfaces with the data-center underlay network, sending and receiving tunneled packets to and from the underlay. The NVE may also run a control protocol with other entities on the network, such as the Network Virtualization Authority.

4.1. NVE Co-located with Server Hypervisor

When server virtualization is used, the entire NVE functionality will typically be implemented as part of the hypervisor and/or virtual switch on the server. In such cases, the Tenant System interacts with the hypervisor, and the hypervisor interacts with the NVE. Because the interaction between the hypervisor and NVE is implemented entirely in software on the server, there is no "on-the-wire" protocol between Tenant Systems (or the hypervisor) and the NVE that needs to be standardized. While there may be APIs between the NVE and hypervisor to support necessary interaction, the details of such APIs are not in scope for the NVO3 WG at the time of publication of this memo.

Implementing NVE functionality entirely on a server has the disadvantage that server CPU resources must be spent implementing the NVO3 functionality. Experimentation with overlay approaches and previous experience with TCP and checksum adapter offloads suggest that offloading certain NVE operations (e.g., encapsulation and decapsulation operations) onto the physical network adapter can produce performance advantages. As has been done with checksum and/or TCP server offload and other optimization approaches, there may be benefits to offloading common operations onto adapters where possible. Just as important, the addition of an overlay header can disable existing adapter offload capabilities that are generally not prepared to handle the addition of a new header or other operations associated with an NVE.

While the exact details of how to split the implementation of specific NVE functionality between a server and its network adapters are an implementation matter and outside the scope of IETF standardization, the NVO3 architecture should be cognizant of and support such separation. Ideally, it may even be possible to bypass the hypervisor completely on critical data-path operations so that packets between a Tenant System and its VN can be sent and received without having the hypervisor involved in each individual packet operation.

4.2. Split-NVE

Another possible scenario leads to the need for a split-NVE implementation. An NVE running on a server (e.g., within a hypervisor) could support NVO3 service towards the tenant but not perform all NVE functions (e.g., encapsulation) directly on the server; some of the actual NVO3 functionality could be implemented on (i.e., offloaded to) an adjacent switch to which the server is attached. While one could imagine a number of link types between a server and the NVE, one simple deployment scenario would involve a server and NVE separated by a simple L2 Ethernet link. A more complicated scenario would have the server and NVE separated by a bridged access network, such as when the NVE resides on a Top of Rack (ToR) switch, with an embedded switch residing between servers and the ToR switch.

For the split-NVE case, protocols will be needed that allow the hypervisor and NVE to negotiate and set up the necessary state so that traffic sent across the access link between a server and the NVE can be associated with the correct virtual network instance. Specifically, on the access link, traffic belonging to a specific Tenant System would be tagged with a specific VLAN C-TAG that identifies which specific NVO3 virtual network instance it connects to. The hypervisor-NVE protocol would negotiate which VLAN C-TAG to use for a particular virtual network instance. More details of the protocol requirements for functionality between hypervisors and NVEs can be found in [NVE-NVA].

4.2.1. Tenant VLAN Handling in Split-NVE Case

Preserving tenant VLAN tags across an NVO3 VN, as described in Section 3.1.1, poses additional complications in the split-NVE case. The portion of the NVE that performs the encapsulation function needs access to the specific VLAN tags that the Tenant System is using in order to include them in the encapsulated packet. When an NVE is implemented entirely within the hypervisor, the NVE has access to the complete original packet (including any VLAN tags) sent by the tenant. In the split-NVE case, however, the VLAN tag used between the hypervisor and offloaded portions of the NVE normally only identifies the specific VN that traffic belongs to. In order to allow a tenant to preserve VLAN information from end to end between Tenant Systems in the split-NVE case, additional mechanisms would be needed (e.g., carry an additional VLAN tag by carrying both a C-TAG and a Service VLAN Tag (S-TAG) as specified in [IEEE.802.1Q] where the C-TAG identifies the tenant VLAN end to end and the S-TAG identifies the VN locally between each Tenant System and the corresponding NVE).
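
One way to picture the double-tagging option mentioned above is the following sketch, in which the frame is modeled as a dictionary and the S-TAG chosen per VN is an assumed local agreement between the hypervisor and the offloaded portion of the NVE; nothing here is specified by this document.

   # Hypothetical sketch: preserving a tenant C-TAG across a split-NVE
   # access link by adding an S-TAG that identifies the VN locally.
   def hypervisor_send(vn_to_stag, vn, frame):
       """frame carries the tenant's own C-TAG end to end."""
       return dict(frame, s_tag=vn_to_stag[vn])    # add local S-TAG

   def offloaded_nve_receive(stag_to_vn, tagged_frame):
       vn = stag_to_vn[tagged_frame["s_tag"]]      # S-TAG -> VN instance
       inner = {k: v for k, v in tagged_frame.items() if k != "s_tag"}
       return vn, inner                            # C-TAG still present

   vn_to_stag = {5001: 100}
   stag_to_vn = {100: 5001}
   frame = {"dst": "aa:bb:cc:dd:ee:ff", "c_tag": 42, "payload": b"..."}
   vn, inner = offloaded_nve_receive(stag_to_vn,
                                     hypervisor_send(vn_to_stag, 5001, frame))
   assert vn == 5001 and inner["c_tag"] == 42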

4.3. NVE State

NVEs maintain internal data structures and state to support the sending and receiving of tenant traffic. An NVE may need some or all of the following information (a non-normative data-structure sketch follows this list):

1. An NVE keeps track of which attached Tenant Systems are connected to which virtual networks. When a Tenant System attaches to a virtual network, the NVE will need to create or update the local state for that virtual network. When the last Tenant System detaches from a given VN, the NVE can reclaim state associated with that VN.

2. For tenant unicast traffic, an NVE maintains a per-VN table of mappings from Tenant System (inner) addresses to remote NVE (outer) addresses.

3. For tenant multicast (or broadcast) traffic, an NVE maintains a per-VN table of mappings and other information on how to deliver tenant multicast (or broadcast) traffic. If the underlying network supports IP multicast, the NVE could use IP multicast to deliver tenant traffic. In such a case, the NVE would need to know what IP underlay multicast address to use for a given VN. Alternatively, if the underlying network does not support multicast, a source NVE could use unicast replication to deliver traffic. In such a case, an NVE would need to know which remote NVEs are participating in the VN. An NVE could use both approaches, switching from one mode to the other depending on factors such as bandwidth efficiency and group membership sparseness. [FRAMEWORK-MCAST] discusses the subject of multicast handling in NVO3 in further detail.

4. An NVE maintains necessary information to encapsulate outgoing traffic, including what type of encapsulation and what value to use for a Context ID to identify the VN within the encapsulation header.

5. In order to deliver incoming encapsulated packets to the correct Tenant Systems, an NVE maintains the necessary information to map incoming traffic to the appropriate VAP (i.e., TSI).

6. An NVE may find it convenient to maintain additional per-VN information such as QoS settings, Path MTU information, Access Control Lists (ACLs), etc.
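
The following non-normative sketch shows one possible arrangement of the per-VN state enumerated in the list above; every type and field name is an assumption made for illustration.

   # One possible (non-normative) arrangement of the NVE state listed above.
   from dataclasses import dataclass, field
   from typing import Dict, List, Optional

   @dataclass
   class VnState:
       context_id: int                       # item 4: VN ID in encap header
       encapsulation: str                    # item 4: e.g., assumed "geneve"
       unicast_map: Dict[str, str] = field(default_factory=dict)
                                             # item 2: inner addr -> outer NVE
       mcast_underlay_group: Optional[str] = None
                                             # item 3: underlay IP multicast
       replication_list: List[str] = field(default_factory=list)
                                             # item 3: unicast replication
       vap_by_tsi: Dict[str, str] = field(default_factory=dict)
                                             # items 1 and 5: TSI -> VAP
       per_vn_options: Dict[str, object] = field(default_factory=dict)
                                             # item 6: QoS, MTU, ACLs, ...

   nve_state: Dict[int, VnState] = {}        # item 1: per-VN table on the NVE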

4.4. Multihoming of NVEs

NVEs may be multihomed. That is, an NVE may have more than one IP address associated with it on the underlay network. Multihoming happens in two different scenarios. First, an NVE may have multiple interfaces connecting it to the underlay. Each of those interfaces will typically have a different IP address, resulting in a specific Tenant Address (on a specific VN) being reachable through the same NVE but through more than one underlay IP address. Second, a specific Tenant System may be reachable through more than one NVE, each having one or more underlay addresses. In both cases, NVE address-mapping functionality needs to support one-to-many mappings and enable a sending NVE to (at a minimum) be able to fail over from one IP address to another, e.g., should a specific NVE underlay address become unreachable.
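
A sketch of the one-to-many mapping with simple failover described above follows; the reachability predicate is a stand-in for whatever failure-detection mechanism an implementation actually uses.

   # Hypothetical one-to-many inner-to-outer mapping with simple failover.
   def pick_outer_address(mapping, inner_addr, reachable):
       """mapping: inner addr -> ordered list of candidate underlay addrs.
       reachable: predicate standing in for real reachability detection."""
       for outer in mapping[inner_addr]:
           if reachable(outer):
               return outer              # first usable underlay address
       return None                       # no path currently available

   mapping = {"10.1.1.2": ["192.0.2.20", "198.51.100.7"]}
   down = {"192.0.2.20"}
   assert pick_outer_address(mapping, "10.1.1.2",
                             lambda a: a not in down) == "198.51.100.7"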

Finally, multihomed NVEs introduce complexities when source unicast replication is used to implement tenant multicast as described in Section 4.3. Specifically, an NVE should only receive one copy of a replicated packet.

Multihoming is needed to support important use cases. First, a bare metal server may have multiple uplink connections to either the same or different NVEs. Having only a single physical path to an upstream NVE, or indeed, having all traffic flow through a single NVE would be considered unacceptable in highly resilient deployment scenarios that seek to avoid single points of failure. Moreover, in today's networks, the availability of multiple paths would require that they be usable in an active-active fashion (e.g., for load balancing).

4.5. Virtual Access Point (VAP)

The VAP is the NVE side of the interface between the NVE and the TS. Traffic to and from the tenant flows through the VAP. If an NVE runs into difficulties sending traffic received on the VAP, it may need to signal such errors back to the VAP. Because the VAP is an emulation of a physical port, its ability to signal NVE errors is limited and lacks sufficient granularity to reflect all possible errors an NVE may encounter (e.g., inability to reach a particular destination). Some errors, such as an NVE losing all of its connections to the underlay, could be reflected back to the VAP by effectively disabling it. This state change would reflect itself on the TS as an interface going down, allowing the TS to implement interface error handling (e.g., failover) in the same manner as when a physical interface becomes disabled.
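
The coarse-grained error signal described above could be modeled as in the following sketch; the notion of a per-VAP operational state is an assumed abstraction of how the condition is reflected to the Tenant System.

   # Sketch: reflecting loss of underlay connectivity as a VAP "down" event.
   class Vap:
       def __init__(self):
           self.oper_up = True           # what the TS sees as link state
       def set_oper(self, up):
           self.oper_up = up             # TS interface handling reacts here

   def underlay_links_changed(vaps, live_underlay_links):
       if not live_underlay_links:       # NVE lost all underlay connectivity
           for vap in vaps:
               vap.set_oper(False)       # TS sees its interface go down
       else:
           for vap in vaps:
               vap.set_oper(True)

   vap = Vap()
   underlay_links_changed([vap], live_underlay_links=[])
   assert vap.oper_up is False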

5. Tenant System Types

This section describes a number of special Tenant System types and how they fit into an NVO3 system.

5.1. Overlay-Aware Network Service Appliances

Some Network Service Appliances [NVE-NVA] (virtual or physical) provide tenant-aware services. That is, the specific service they provide depends on the identity of the tenant making use of the service. For example, firewalls are now becoming available that support multitenancy where a single firewall provides virtual firewall service on a per-tenant basis, using per-tenant configuration rules and maintaining per-tenant state. Such appliances will be aware of the VN an activity corresponds to while processing requests. Unlike server virtualization, which shields VMs from needing to know about multitenancy, a Network Service Appliance may explicitly support multitenancy. In such cases, the Network Service Appliance itself will be aware of network virtualization and either embed an NVE directly or implement a split-NVE as described in Section 4.2. Unlike server virtualization, however, the Network Service Appliance may not be running a hypervisor, and the VM orchestration system may not interact with the Network Service Appliance. The NVE on such appliances will need to support a control plane to obtain the necessary information needed to fully participate in an NV Domain.

5.2. Bare Metal Servers

Many data centers will continue to have at least some servers operating as non-virtualized (or "bare metal") machines running a traditional operating system and workload. In such systems, there will be no NVE functionality on the server, and the server will have no knowledge of NVO3 (including whether overlays are even in use). In such environments, the NVE functionality can reside on the first-hop physical switch. In such a case, the network administrator would (manually) configure the switch to enable the appropriate NVO3 functionality on the switch port connecting the server and associate that port with a specific virtual network. Such configuration would typically be static, since the server is not virtualized and, once configured, is unlikely to change frequently. Consequently, this scenario does not require any protocol or standards work.

5.3. Gateways

Gateways on VNs relay traffic onto and off of a virtual network. Tenant Systems use gateways to reach destinations outside of the local VN. Gateways receive encapsulated traffic from one VN, remove the encapsulation header, and send the native packet out onto the data-center network for delivery. Outside traffic enters a VN in a reverse manner.

Gateways can be either virtual (i.e., implemented as a VM) or physical (i.e., a standalone physical device). For performance reasons, standalone hardware gateways may be desirable in some cases. Such gateways could consist of a simple switch forwarding traffic from a VN onto the local data-center network or could embed router functionality. On such gateways, network interfaces connecting to virtual networks will (at least conceptually) embed NVE (or split-NVE) functionality within them. As in the case with Network Service Appliances, gateways may not support a hypervisor and will need an appropriate control-plane protocol to obtain the information needed to provide NVO3 service.

Gateways handle several different use cases. For example, one use case consists of systems supporting overlays together with systems that do not (e.g., bare metal servers). Gateways could be used to connect legacy systems supporting, e.g., L2 VLANs, to specific virtual networks, effectively making them part of the same virtual network. Gateways could also forward traffic between a virtual network and other hosts on the data-center network or relay traffic between different VNs. Finally, gateways can provide external connectivity such as Internet or VPN access.

5.3.1. Gateway Taxonomy

As can be seen from the discussion above, there are several types of gateways that can exist in an NVO3 environment. This section breaks them down into the various types that could be supported. Note that each of the types below could be either implemented in a centralized manner or distributed to coexist with the NVEs.

5.3.1.1. L2 Gateways (Bridging)

L2 Gateways act as Layer 2 bridges to forward Ethernet frames based on the MAC addresses present in them.

L2 VN to Legacy L2: This type of gateway bridges traffic between L2 VNs and other legacy L2 networks such as VLANs or L2 VPNs.

L2 VN to L2 VN: The main motivation for this type of gateway is to create separate groups of Tenant Systems using L2 VNs such that the gateway can enforce network policies between each L2 VN.

5.3.1.2. L3 Gateways (Only IP Packets)

L3 Gateways forward IP packets based on the IP addresses present in the packets.

L3 VN to Legacy L2: This type of gateway forwards packets between L3 VNs and legacy L2 networks such as VLANs or L2 VPNs. The original sender's destination MAC address in any frames that the gateway forwards from a legacy L2 network would be the MAC address of the gateway.

L3 VN to Legacy L3: This type of gateway forwards packets between L3 VNs and legacy L3 networks. These legacy L3 networks could be local to the data center, be in the WAN, or be an L3 VPN.

L3 VN to L2 VN: This type of gateway forwards packets between L3 VNs and L2 VNs. The original sender's destination MAC address in any frames that the gateway forwards from an L2 VN would be the MAC address of the gateway.

L2 VN to L2 VN: This type of gateway acts similar to a traditional router that forwards between L2 interfaces. The original sender's destination MAC address in any frames that the gateway forwards from any of the L2 VNs would be the MAC address of the gateway.

L3 VN to L3 VN: The main motivation for this type of gateway is to create separate groups of Tenant Systems using L3 VNs such that the gateway can enforce network policies between each L3 VN.
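
For illustration only, the taxonomy above can be summarized in a small data structure. The following Python sketch (the enum name and labels are hypothetical and not defined by this architecture) enumerates the gateway types from Sections 5.3.1.1 and 5.3.1.2:

   from enum import Enum

   class GatewayType(Enum):
       # L2 Gateways (Section 5.3.1.1): forward Ethernet frames by MAC address.
       L2VN_TO_LEGACY_L2 = "L2 VN <-> legacy L2 (VLAN or L2 VPN)"
       L2VN_TO_L2VN = "L2 VN <-> L2 VN (policy enforcement between L2 VNs)"
       # L3 Gateways (Section 5.3.1.2): forward IP packets by IP address.
       L3VN_TO_LEGACY_L2 = "L3 VN <-> legacy L2"
       L3VN_TO_LEGACY_L3 = "L3 VN <-> legacy L3 (local, WAN, or L3 VPN)"
       L3VN_TO_L2VN = "L3 VN <-> L2 VN"
       L2VN_ROUTED = "L2 VN <-> L2 VN (routed, as a traditional router would)"
       L3VN_TO_L3VN = "L3 VN <-> L3 VN (policy enforcement between L3 VNs)"

   print(GatewayType.L3VN_TO_L3VN.value)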

5.4. Distributed Inter-VN Gateways

The relaying of traffic from one VN to another deserves special consideration. Whether traffic is permitted to flow from one VN to another is a matter of policy and would not (by default) be allowed unless explicitly enabled. In addition, NVAs are the logical place to maintain policy information about allowed inter-VN communication. Policy enforcement for inter-VN communication can be handled in (at least) two different ways. Explicit gateways could be the central point for such enforcement, with all inter-VN traffic forwarded to such gateways for processing. Alternatively, the NVA can provide such information directly to NVEs by either providing a mapping for a target Tenant System (TS) on another VN or indicating that such communication is disallowed by policy.

When inter-VN gateways are centralized, traffic between TSs on different VNs can take suboptimal paths, i.e., triangular routing results in paths that always traverse the gateway. In the worst case, traffic between two TSs connected to the same NVE can be hair-pinned through an external gateway. As an optimization, individual NVEs can be part of a distributed gateway that performs such relaying, reducing or completely eliminating triangular routing. In a distributed gateway, each ingress NVE can perform such relaying activity directly so long as it has access to the policy information needed to determine whether cross-VN communication is allowed. Having individual NVEs be part of a distributed gateway allows them to tunnel traffic directly to the destination NVE without the need to take suboptimal paths.
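
As a non-normative illustration of the distributed-gateway behavior described above, the following Python sketch shows how an ingress NVE that is part of a distributed gateway might consult inter-VN policy before tunneling directly to the destination NVE; the class name and table layouts are hypothetical:

   # Non-normative illustration: an ingress NVE acting as part of a distributed
   # inter-VN gateway checks policy before tunneling directly to the egress NVE.
   class DistributedGateway:
       def __init__(self, inter_vn_policy, mappings):
           self.inter_vn_policy = inter_vn_policy  # {(src_vn, dst_vn): allowed?}
           self.mappings = mappings                # {(vn, tenant_addr): remote NVE}

       def next_hop_nve(self, src_vn, dst_vn, dst_tenant_addr):
           """Return the NVE to tunnel to, or None if policy forbids the flow."""
           if src_vn != dst_vn and not self.inter_vn_policy.get((src_vn, dst_vn)):
               return None                         # inter-VN traffic denied by default
           # Relaying happens at the ingress NVE, so there is no hair-pinning
           # through a centralized gateway and no triangular routing.
           return self.mappings.get((dst_vn, dst_tenant_addr))

   gw = DistributedGateway({("vn-blue", "vn-red"): True},
                           {("vn-red", "10.0.2.7"): "192.0.2.14"})
   print(gw.next_hop_nve("vn-blue", "vn-red", "10.0.2.7"))   # -> 192.0.2.14
   print(gw.next_hop_nve("vn-red", "vn-blue", "10.0.1.1"))   # -> None (no policy)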

The NVO3 architecture supports distributed gateways for the case of inter-VN communication. Such support requires that NVO3 control protocols include mechanisms for the maintenance and distribution of policy information about what type of cross-VN communication is allowed so that NVEs acting as distributed gateways can tunnel traffic from one VN to another as appropriate.

Distributed gateways could also be used to distribute other traditional router services to individual NVEs. The NVO3 architecture does not preclude such implementations but does not define or require them as they are outside the scope of the NVO3 architecture.

5.5. ARP and Neighbor Discovery

Strictly speaking, for an L2 service, special processing of the Address Resolution Protocol (ARP) [RFC826] and IPv6 Neighbor Discovery (ND) [RFC4861] is not required. ARP requests are broadcast, and an NVO3 can deliver ARP requests to all members of a given L2 virtual network just as it does for any packet sent to an L2 broadcast address. Similarly, ND requests are sent via IP multicast, which NVO3 can support by delivering via L2 multicast. However, as a performance optimization, an NVE can intercept ARP (or ND) requests from its attached TSs and respond to them directly using information in its mapping tables. Since an NVE will have mechanisms for determining the NVE address associated with a given TS, the NVE can leverage the same mechanisms to suppress sending ARP and ND requests for a given TS to other members of the VN. The NVO3 architecture supports such a capability.
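
The ARP/ND suppression optimization described above can be pictured with a minimal, non-normative sketch: the NVE answers an ARP request from a local Tenant System out of its mapping table and only floods the request to the VN when no mapping is known (the function and table layout are hypothetical):

   # Non-normative sketch of ARP suppression at an NVE.  The arp_table maps
   # (vn, tenant IP) to a tenant MAC and would be populated from the NVE's
   # mapping tables (learned via the NVE-NVA protocol).
   def handle_arp_request(vn, target_ip, arp_table):
       mac = arp_table.get((vn, target_ip))
       if mac is not None:
           return ("reply", mac)      # answer locally; do not flood the VN
       return ("flood", None)         # fall back to delivery to all VN members

   arp_table = {("vn-blue", "10.0.1.5"): "52:54:00:ab:cd:ef"}
   print(handle_arp_request("vn-blue", "10.0.1.5", arp_table))  # ('reply', ...)
   print(handle_arp_request("vn-blue", "10.0.1.9", arp_table))  # ('flood', None)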

6. NVE-NVE Interaction

Individual NVEs will interact with each other for the purposes of tunneling and delivering traffic to remote TSs. At a minimum, a control protocol may be needed for tunnel setup and maintenance. For example, tunneled traffic may need to be encrypted or integrity protected, in which case it will be necessary to set up appropriate security associations between NVE peers. It may also be desirable to perform tunnel maintenance (e.g., continuity checks) on a tunnel in order to detect when a remote NVE becomes unreachable. Such generic tunnel setup and maintenance functions are not generally NVO3-specific. Hence, the NVO3 architecture expects to leverage existing tunnel maintenance protocols rather than defining new ones.

Some NVE-NVE interactions may be specific to NVO3 (in particular, be related to information kept in mapping tables) and agnostic to the specific tunnel type being used. For example, when tunneling traffic for TS-X to a remote NVE, it is possible that TS-X is not presently associated with the remote NVE. Normally, this should not happen, but there could be race conditions where the information an NVE has learned from the NVA is out of date relative to actual conditions. In such cases, the remote NVE could return an error or warning indication, allowing the sending NVE to attempt a recovery or otherwise attempt to mitigate the situation.

The NVE-NVE interaction could signal a range of indications, for example:

o "No such TS here", upon a receipt of a tunneled packet for an unknown TS

o "TS-X not here, try the following NVE instead" (i.e., a redirect)

o "Delivered to correct NVE but could not deliver packet to TS-X"

When an NVE receives information from a remote NVE that conflicts with the information it has in its own mapping tables, it should consult with the NVA to resolve those conflicts. In particular, it should confirm that the information it has is up to date, and it might indicate the error to the NVA so as to nudge the NVA into following up (as appropriate). While it might make sense for an NVE to update its mapping table temporarily in response to an error from a remote NVE, any changes must be handled carefully as doing so can raise security considerations if the received information cannot be authenticated. That said, a sending NVE might still take steps to mitigate a problem, such as applying rate limiting to data traffic towards a particular NVE or TS.
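
As a purely illustrative sketch (not a wire format or a normative procedure), the indications listed above and the cautious handling described in this section might be modeled as follows, with the NVA remaining authoritative for mappings; all names are hypothetical:

   # Purely illustrative: how a sending NVE might react to NVE-NVE indications.
   NO_SUCH_TS       = "no-such-ts"
   TS_REDIRECT      = "ts-redirect"       # "try the following NVE instead"
   TS_UNDELIVERABLE = "ts-undeliverable"  # right NVE, but TS could not be reached

   def handle_indication(indication, ts, verify_with_nva, rate_limit, redirect=None):
       """verify_with_nva and rate_limit are callbacks supplied by the NVE."""
       verify_with_nva(ts)                # the NVA remains authoritative for mappings
       if indication == TS_REDIRECT and redirect is not None:
           return ("tentative-redirect", redirect)   # use unauthenticated hints cautiously
       if indication in (NO_SUCH_TS, TS_UNDELIVERABLE):
           rate_limit(ts)                 # mitigate by limiting traffic toward this TS
           return ("await-nva-update", None)
       return ("ignore", None)

   print(handle_indication(NO_SUCH_TS, "ts-x", lambda ts: None, lambda ts: None))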

7. Network Virtualization Authority (NVA)

Before sending traffic to and receiving traffic from a virtual network, an NVE must obtain the information needed to build its internal forwarding tables and state as listed in Section 4.3. An NVE can obtain such information from a Network Virtualization Authority (NVA).

The NVA is the entity that is expected to provide address mapping and other information to NVEs. NVEs can interact with an NVA to obtain any required information they need in order to properly forward traffic on behalf of tenants. The term "NVA" refers to the overall system, without regard to its scope or how it is implemented.

7.1. How an NVA Obtains Information

There are two primary ways in which an NVA can obtain the address dissemination information it manages: from the VM orchestration system and/or directly from the NVEs themselves.

On virtualized systems, the NVA may be able to obtain the address-mapping information associated with VMs from the VM orchestration system itself. If the VM orchestration system contains a master database for all the virtualization information, having the NVA obtain information directly from the orchestration system would be a natural approach. Indeed, the NVA could effectively be co-located with the VM orchestration system itself. In such systems, the VM orchestration system communicates with the NVE indirectly through the hypervisor.

However, as described in Section 4, not all NVEs are associated with hypervisors. In such cases, NVAs cannot leverage VM orchestration protocols to interact with an NVE and will instead need to peer directly with them. By peering directly with an NVE, NVAs can obtain information about the TSs connected to that NVE and can distribute information to the NVE about the VNs those TSs are associated with. For example, whenever a Tenant System attaches to an NVE, that NVE would notify the NVA that the TS is now associated with that NVE. Likewise, when a TS detaches from an NVE, that NVE would inform the NVA. By communicating directly with NVEs, both the NVA and the NVE are able to maintain up-to-date information about all active tenants and the NVEs to which they are attached.
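
A minimal, non-normative sketch of the attach/detach notifications described above is shown below; the class and method names are hypothetical, not protocol elements:

   # Non-normative sketch of attach/detach notifications from an NVE to its NVA.
   class Nva:
       def __init__(self):
           self.bindings = {}                     # (vn, ts_addr) -> nve_addr

       def ts_attached(self, vn, ts_addr, nve_addr):
           self.bindings[(vn, ts_addr)] = nve_addr

       def ts_detached(self, vn, ts_addr):
           self.bindings.pop((vn, ts_addr), None)

       def lookup(self, vn, ts_addr):
           return self.bindings.get((vn, ts_addr))

   nva = Nva()
   nva.ts_attached("vn-blue", "10.0.1.5", "192.0.2.11")  # NVE reports a new TS
   print(nva.lookup("vn-blue", "10.0.1.5"))              # -> 192.0.2.11
   nva.ts_detached("vn-blue", "10.0.1.5")                # TS shuts down or migrates
   print(nva.lookup("vn-blue", "10.0.1.5"))              # -> None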

7.2. Internal NVA Architecture

For reliability and fault tolerance reasons, an NVA would be implemented in a distributed or replicated manner without single points of failure. How the NVA is implemented, however, is not important to an NVE so long as the NVA provides a consistent and well-defined interface to the NVE. For example, an NVA could be implemented via database techniques whereby a server stores address-mapping information in a traditional (possibly replicated) database. Alternatively, an NVA could be implemented in a distributed fashion using an existing (or modified) routing protocol to maintain and distribute mappings. So long as there is a clear interface between the NVE and NVA, how an NVA is architected and implemented is not important to an NVE.

A number of architectural approaches could be used to implement NVAs themselves. NVAs manage address bindings and distribute them to where they need to go. One approach would be to use the Border Gateway Protocol (BGP) [RFC4364] (possibly with extensions) and route reflectors. Another approach could use a transaction-based database model with replicated servers. Because the implementation details are local to an NVA, there is no need to pick exactly one solution technology, so long as the external interfaces to the NVEs (and remote NVAs) are sufficiently well defined to achieve interoperability.

7.3. NVA External Interface

Conceptually, from the perspective of an NVE, an NVA is a single entity. An NVE interacts with the NVA, and it is the NVA's responsibility to ensure that interactions between the NVE and NVA result in consistent behavior across the NVA and all other NVEs using the same NVA. Because an NVA is built from multiple internal components, an NVA will have to ensure that information flows to all internal NVA components appropriately.

One architectural question is how the NVA presents itself to the NVE. For example, an NVA could be required to provide access via a single IP address. If NVEs only have one IP address to interact with, it would be the responsibility of the NVA to handle NVA component failures, e.g., by using a "floating IP address" that migrates among NVA components to ensure that the NVA can always be reached via the one address. Having all NVA accesses through a single IP address, however, adds constraints to implementing robust failover, load balancing, etc.

In the NVO3 architecture, an NVA is accessed through one or more IP addresses (or an IP address/port combination). If multiple IP addresses are used, each IP address provides equivalent functionality, meaning that an NVE can use any of the provided addresses to interact with the NVA. Should one address stop working, an NVE is expected to fail over to another. While the different addresses result in equivalent functionality, one address may respond more quickly than another, e.g., due to network conditions, load on the server, etc.

To provide some control over load balancing, NVA addresses may have an associated priority. Addresses are used in order of priority, with no explicit preference among NVA addresses having the same priority. To provide basic load balancing among NVAs of equal priorities, NVEs could use some randomization input to select among equal-priority NVAs. Such a priority scheme facilitates failover and load balancing, for example, by allowing a network operator to specify a set of primary and backup NVAs.
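
The priority scheme described above can be illustrated with a short, non-normative sketch: addresses are used in priority order, and a random choice among equal-priority NVAs provides basic load balancing (the function name and data layout are hypothetical):

   # Non-normative sketch of the priority scheme: use NVA addresses in priority
   # order and pick randomly among equal priorities for basic load balancing.
   import random

   def pick_nva(addresses, unreachable=frozenset()):
       """addresses: list of (priority, ip); a lower number is a higher priority."""
       reachable = [(p, ip) for (p, ip) in addresses if ip not in unreachable]
       if not reachable:
           return None
       best = min(p for p, _ in reachable)
       return random.choice([ip for p, ip in reachable if p == best])

   nvas = [(1, "192.0.2.1"), (1, "192.0.2.2"), (2, "198.51.100.9")]
   print(pick_nva(nvas))                                  # a priority-1 NVA
   print(pick_nva(nvas, {"192.0.2.1", "192.0.2.2"}))      # failover to the backup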

It may be desirable to have individual NVA addresses responsible for a subset of information about an NV Domain. In such a case, NVEs would use different NVA addresses for obtaining or updating information about particular VNs or TS bindings. Key questions with such an approach are how information would be partitioned and how an NVE could determine which address to use to get the information it needs.

Another possibility is to treat the information on which NVA addresses to use as cached (soft-state) information at the NVEs, so that any NVA address can be used to obtain any information, but NVEs are informed of preferences for which addresses to use for particular information on VNs or TS bindings. That preference information would be cached for future use to improve behavior, e.g., if all requests for a specific subset of VNs are forwarded to a specific NVA component, the NVE can optimize future requests within that subset by sending them directly to that NVA component via its address.

8. NVE-NVA Protocol

As outlined in Section 4.3, an NVE needs certain information in order to perform its functions. To obtain such information from an NVA, an NVE-NVA protocol is needed. The NVE-NVA protocol provides two functions. First, it allows an NVE to obtain information about the location and status of other TSs with which it needs to communicate. Second, the NVE-NVA protocol provides a way for NVEs to provide updates to the NVA about the TSs attached to that NVE (e.g., when a TS attaches or detaches from the NVE) or about communication errors encountered when sending traffic to remote NVEs. For example, an NVE could indicate that a destination it is trying to reach at a destination NVE is unreachable for some reason.

While having a direct NVE-NVA protocol might seem straightforward, the presence of existing VM orchestration systems complicates the choices an NVE has for interacting with the NVA.

8.1. NVE-NVA Interaction Models

An NVE interacts with an NVA in at least two (quite different) ways:

o NVEs embedded within the same server as the hypervisor can obtain necessary information entirely through the hypervisor-facing side of the NVE. Such an approach is a natural extension to existing VM orchestration systems supporting server virtualization because an existing protocol between the hypervisor and VM orchestration system already exists and can be leveraged to obtain any needed information. Specifically, VM orchestration systems used to create, terminate, and migrate VMs already use well-defined (though typically proprietary) protocols to handle the interactions between the hypervisor and VM orchestration system. For such systems, it is a natural extension to leverage the existing orchestration protocol as a sort of proxy protocol for handling the interactions between an NVE and the NVA. Indeed, existing implementations can already do this.

o Alternatively, an NVE can obtain needed information by interacting directly with an NVA via a protocol operating over the data-center underlay network. Such an approach is needed to support NVEs that are not associated with systems performing server virtualization (e.g., as in the case of a standalone gateway) or where the NVE needs to communicate directly with the NVA for other reasons.

The NVO3 architecture will focus on support for the second model above. Existing virtualization environments are already using the first model, but they are not sufficient to cover the case of standalone gateways -- such gateways may not support virtualization and do not interface with existing VM orchestration systems.

8.2. Direct NVE-NVA Protocol

An NVE can interact directly with an NVA via an NVE-NVA protocol. Such a protocol can be either independent of the NVA internal protocol or an extension of it. Using a purpose-specific protocol would provide architectural separation and independence between the NVE and NVA. The NVE and NVA interact in a well-defined way, and changes in the NVA (or NVE) do not need to impact each other. Using a dedicated protocol also ensures that both NVE and NVA implementations can evolve independently and without dependencies on each other. Such independence is important because the upgrade path for NVEs and NVAs is quite different. Upgrading all the NVEs at a site will likely be more difficult in practice than upgrading NVAs because of their large number -- one on each end device. In practice, it would be prudent to assume that once an NVE has been implemented and deployed, it may be challenging to get subsequent NVE extensions and changes implemented and deployed, whereas an NVA (and its associated internal protocols) is more likely to evolve over time as experience is gained from usage and upgrades will involve fewer nodes.

Requirements for a direct NVE-NVA protocol can be found in [NVE-NVA].

8.3. Propagating Information Between NVEs and NVAs

Information flows between NVEs and NVAs in both directions. The NVA maintains information about all VNs in the NV Domain so that NVEs do not need to do so themselves. NVEs obtain information from the NVA about where a given remote TS destination resides. NVAs, in turn, obtain information from NVEs about the individual TSs attached to those NVEs.

While the NVA could push information relevant to every virtual network to every NVE, such an approach scales poorly and is unnecessary. In practice, a given NVE will only need and want to know about VNs to which it is attached. Thus, an NVE should be able to subscribe to updates only for the virtual networks it is interested in receiving updates for. The NVO3 architecture supports a model where an NVE is not required to have full mapping tables for all virtual networks in an NV Domain.
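
A minimal, non-normative sketch of such a subscription model follows; the NVE subscribes lazily when a local Tenant System first attaches to a VN, and the NVA pushes updates only for subscribed VNs (the names are hypothetical):

   # Non-normative sketch of per-VN subscriptions: the NVE asks the NVA for
   # updates only for virtual networks with locally attached Tenant Systems.
   class NveSubscriptions:
       def __init__(self):
           self.subscribed = set()

       def on_ts_attach(self, vn):
           if vn not in self.subscribed:
               self.subscribed.add(vn)
               return ("subscribe", vn)   # request sent to the NVA (hypothetical)
           return None

       def wants_update(self, vn):
           return vn in self.subscribed   # no full tables for unrelated VNs

   subs = NveSubscriptions()
   print(subs.on_ts_attach("vn-blue"))    # ('subscribe', 'vn-blue')
   print(subs.wants_update("vn-red"))     # False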

Before sending unicast traffic to a remote TS (or TSs for broadcast or multicast traffic), an NVE must know where the remote TS(s) currently reside. When a TS attaches to a virtual network, the NVE obtains information about that VN from the NVA. The NVA can provide that information to the NVE at the time the TS attaches to the VN, either because the NVE requests the information when the attach operation occurs or because the VM orchestration system has initiated the attach operation and provides associated mapping information to the NVE at the same time.

There are scenarios where an NVE may wish to query the NVA about individual mappings within a VN. For example, when sending traffic to a remote TS on a remote NVE, that TS may become unavailable (e.g., because it has migrated elsewhere or has been shut down, in which case the remote NVE may return an error indication). In such situations, the NVE may need to query the NVA to obtain updated mapping information for a specific TS or to verify that the information is still correct despite the error condition. Note that such a query could also be used by the NVA as an indication that there may be an inconsistency in the network and that it should take steps to verify that the information it has about the current state and location of a specific TS is still correct.

For very large virtual networks, the amount of state an NVE needs to maintain for a given virtual network could be significant. Moreover, an NVE may only be communicating with a small subset of the TSs on such a virtual network. In such cases, the NVE may find it desirable to maintain state only for those destinations it is actively communicating with. In such scenarios, an NVE may not want to maintain full mapping information about all destinations on a VN. However, if it needs to communicate with a destination for which it does not have mapping information, it will need to be able to query the NVA on demand for the missing information on a per-destination basis.
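
For illustration only, the on-demand model described above can be sketched as a small mapping cache that queries the NVA per destination on a miss and invalidates entries when errors are reported; the callback and names are hypothetical:

   # Non-normative sketch of on-demand mapping resolution for very large VNs:
   # keep state only for destinations in active use and query the NVA otherwise.
   class MappingCache:
       def __init__(self, query_nva):
           self.query_nva = query_nva             # callback: (vn, ts_addr) -> nve_addr
           self.cache = {}

       def resolve(self, vn, ts_addr):
           key = (vn, ts_addr)
           if key not in self.cache:              # miss: ask the NVA per destination
               self.cache[key] = self.query_nva(vn, ts_addr)
           return self.cache[key]

       def invalidate(self, vn, ts_addr):
           self.cache.pop((vn, ts_addr), None)    # e.g., after an error indication

   cache = MappingCache(lambda vn, ts: "192.0.2.23")  # stand-in for the NVA query
   print(cache.resolve("vn-blue", "10.0.3.4"))        # -> 192.0.2.23 (one NVA query)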

The NVO3 architecture will need to support a range of operations between the NVE and NVA. Requirements for those operations can be found in [NVE-NVA].

9. Federated NVAs

An NVA provides service to the set of NVEs in its NV Domain. Each NVA manages network virtualization information for the virtual networks within its NV Domain. An NV Domain is administered by a single entity.

In some cases, it will be necessary to expand the scope of a specific VN or even an entire NV Domain beyond a single NVA. For example, an administrator managing multiple data centers may wish to operate all of its data centers as a single NV Region. Such cases are handled by having different NVAs peer with each other to exchange mapping information about specific VNs. NVAs operate in a federated manner, with a set of NVAs operating as a loosely coupled federation of individual NVAs. If a virtual network spans multiple NVAs (e.g., located at different data centers), and an NVE needs to deliver tenant traffic to an NVE that is part of a different NV Domain, it still interacts only with its NVA, even when obtaining mappings for NVEs associated with a different NV Domain.

Figure 3 shows a scenario where two separate NV Domains (A and B) share information about a VN. VM1 and VM2 both connect to the same VN, even though the two VMs are in separate NV Domains. There are two cases to consider. In the first case, NV Domain B does not allow NVE-A to tunnel traffic directly to NVE-B. There could be a number of reasons for this. For example, NV Domains A and B may not share a common address space (i.e., traversal through a NAT device is required), or for policy reasons, a domain might require that all traffic between separate NV Domains be funneled through a particular device (e.g., a firewall). In such cases, NVA-2 will advertise to NVA-1 that VM1 on the VN is available and direct that traffic between the two nodes be forwarded via IP-G (an IP Gateway). IP-G would then decapsulate received traffic from one NV Domain, translate it appropriately for the other domain, and re-encapsulate the packet for delivery.

                    xxxxxx                          xxxx        +-----+
   +-----+     xxxxxx    xxxxxx               xxxxxx    xxxxx   | VM2 |
   | VM1 |    xx              xx            xxx             xx  |-----|
   |-----|   xx                x          xx                 x  |NVE-B|
   |NVE-A|   x                 x  +----+  x                   x +-----+
   +--+--+   x   NV Domain A   x  |IP-G|--x                    x    |
      +-------x               xx--+    | x                     xx   |
              x              x    +----+ x     NV Domain B      x   |
           +---x           xx            xx                     x---+
           |    xxxx      xx           +->xx                   xx
           |       xxxxxxxx            |   xx                 xx
       +---+-+                         |     xx              xx
       |NVA-1|                      +--+--+    xx         xxx
       +-----+                      |NVA-2|     xxxx   xxxx
                                    +-----+        xxxxx
        

Figure 3: VM1 and VM2 in Different NV Domains

NVAs at one site share information and interact with NVAs at other sites, but only in a controlled manner. It is expected that policy and access control will be applied at the boundaries between different sites (and NVAs) so as to minimize dependencies on external NVAs that could negatively impact the operation within a site. It is an architectural principle that operations involving NVAs at one site not be immediately impacted by failures or errors at another site.

(Of course, communication between NVEs in different NV Domains may be impacted by such failures or errors.) It is a strong requirement that an NVA continue to operate properly for local NVEs even if external communication is interrupted (e.g., should communication between a local and remote NVA fail).

At a high level, a federation of interconnected NVAs has some analogies to BGP and Autonomous Systems. Like an Autonomous System, NVAs at one site are managed by a single administrative entity and do not interact with external NVAs except as allowed by policy. Likewise, the interface between NVAs at different sites is well defined so that the internal details of operations at one site are largely hidden to other sites. Finally, an NVA only peers with other NVAs that it has a trusted relationship with, i.e., where a VN is intended to span multiple NVAs.

Reasons for using a federated model include:

o Provide isolation among NVAs operating at different sites at different geographic locations.

o Control the quantity and rate of information updates that flow (and must be processed) between different NVAs in different data centers.

o Control the set of external NVAs (and external sites) a site peers with. A site will only peer with other sites that are cooperating in providing an overlay service.

o Allow policy to be applied between sites. A site will want to carefully control what information it exports (and to whom) as well as what information it is willing to import (and from whom).

o Allow different protocols and architectures to be used for intra-NVA vs. inter-NVA communication. For example, within a single data center, a replicated transaction server using database techniques might be an attractive implementation option for an NVA, and protocols optimized for intra-NVA communication would likely be different from protocols involving inter-NVA communication between different sites.

o Allow for optimized protocols rather than using a one-size-fits-all approach. Within a data center, networks tend to have lower latency, higher speed, and higher redundancy when compared with WAN links interconnecting data centers. The design constraints and trade-offs for a protocol operating within a data-center network are different from those operating over WAN links. While a single protocol could be used for both cases, there could be advantages to using different and more specialized protocols for the intra- and inter-NVA case.

9.1. Inter-NVA Peering

To support peering between different NVAs, an inter-NVA protocol is needed. The inter-NVA protocol defines what information is exchanged between NVAs. It is assumed that the protocol will be used to share addressing information between data centers and must scale well over WAN links.

10. Control Protocol Work Areas

The NVO3 architecture consists of two major distinct entities: NVEs and NVAs. In order to provide isolation and independence between these two entities, the NVO3 architecture calls for well-defined protocols for interfacing between them. For an individual NVA, the architecture calls for a logically centralized entity that could be implemented in a distributed or replicated fashion. While the IETF may choose to define one or more specific architectural approaches to building individual NVAs, there is little need to pick exactly one approach to the exclusion of others. An NVA for a single domain will likely be deployed as a single vendor product; thus, there is little benefit in standardizing the internal structure of an NVA.

Individual NVAs peer with each other in a federated manner. The NVO3 architecture calls for a well-defined interface between NVAs.

Finally, a hypervisor-NVE protocol is needed to cover the split-NVE scenario described in Section 4.2.

11. NVO3 Data-Plane Encapsulation

When tunneling tenant traffic, NVEs add an encapsulation header to the original tenant packet. The exact encapsulation to use for NVO3 does not seem to be critical. The main requirement is that the encapsulation support a Context ID of sufficient size. A number of encapsulations already exist that provide a VN Context of sufficient size for NVO3. For example, Virtual eXtensible Local Area Network (VXLAN) [RFC7348] has a 24-bit VXLAN Network Identifier (VNI). Network Virtualization using Generic Routing Encapsulation (NVGRE) [RFC7637] has a 24-bit Tenant Network ID (TNI). MPLS-over-GRE provides a 20-bit label field. While there is widespread recognition that a 12-bit VN Context would be too small (only 4096 distinct values), it is generally agreed that 20 bits (1 million distinct values) and 24 bits (16.8 million distinct values) are sufficient for a wide variety of deployment scenarios.
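
For illustration, the following non-normative sketch shows the arithmetic behind the VN Context sizes mentioned above and one way a 24-bit VN Context can be carried in an encapsulation header, mirroring the VXLAN layout of [RFC7348]; the helper function is hypothetical:

   # Non-normative arithmetic for the VN Context sizes discussed above.
   for bits in (12, 20, 24):
       print(f"{bits}-bit VN Context -> {2 ** bits:,} distinct virtual networks")
   # 12 bits -> 4,096; 20 bits -> 1,048,576; 24 bits -> 16,777,216

   # One way to carry a 24-bit VN Context, mirroring the VXLAN header layout of
   # [RFC7348]: an 8-byte header with the I flag set and the VNI placed in the
   # high-order 24 bits of the second 32-bit word.  Illustration only.
   import struct

   def vxlan_header(vni):
       assert 0 <= vni < 2 ** 24
       return struct.pack("!II", 0x08 << 24, vni << 8)

   print(vxlan_header(5000).hex())        # -> 0800000000138800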

12. Operations, Administration, and Maintenance (OAM)

The simplicity of operating and debugging overlay networks will be critical for successful deployment.

Overlay networks are based on tunnels between NVEs, so the Operations, Administration, and Maintenance (OAM) [RFC6291] framework for overlay networks can draw from prior IETF OAM work for tunnel-based networks, specifically L2VPN OAM [RFC6136]. RFC 6136 focuses on Fault Management and Performance Management as fundamental to L2VPN service delivery, leaving the Configuration Management, Accounting Management, and Security Management components of the Open Systems Interconnection (OSI) Fault, Configuration, Accounting, Performance, and Security (FCAPS) taxonomy [M.3400] for further study. This section does likewise for NVO3 OAM, but those three areas continue to be important parts of complete OAM functionality for NVO3.

The relationship between the overlay and underlay networks is a consideration for fault and performance management -- a fault in the underlay may manifest as fault and/or performance issues in the overlay. Diagnosing and fixing such issues are complicated by NVO3 abstracting the underlay network away from the overlay network (e.g., intermediate nodes on the underlay network path between NVEs are hidden from overlay VNs).

NVO3-specific OAM techniques, protocol constructs, and tools are needed to provide visibility beyond this abstraction to diagnose and correct problems that appear in the overlay. Two examples are underlay-aware traceroute [TRACEROUTE-VXLAN] and ping protocol constructs for overlay networks [VXLAN-FAILURE] [NVO3-OVERLAY].

NVO3-specific tools and techniques are best viewed as complements to (i.e., not as replacements for) single-network tools that apply to the overlay and/or underlay networks. Coordination among the individual network tools (for the overlay and underlay networks) and NVO3-aware, dual-network tools is required to achieve effective monitoring and fault diagnosis. For example, the defect detection intervals and performance measurement intervals ought to be coordinated among all tools involved in order to provide consistency and comparability of results.

For further discussion of NVO3 OAM requirements, see [NVO3-OAM].

13. Summary

This document presents the overall architecture for NVO3. The architecture calls for three main areas of protocol work:

1. A hypervisor-NVE protocol to support split-NVEs as discussed in Section 4.2

2. An NVE-NVA protocol for disseminating VN information (e.g., inner to outer address mappings)

3. An NVA-NVA protocol for exchange of information about specific virtual networks between federated NVAs

It should be noted that existing protocols or extensions of existing protocols are applicable.

14. Security Considerations

The data plane and control plane described in this architecture will need to address potential security threats.

For the data plane, tunneled application traffic may need protection against being misdelivered, being modified, or having its content exposed to an inappropriate third party. In all cases, encryption between authenticated tunnel endpoints (e.g., via use of IPsec [RFC4301]) and enforcing policies that control which endpoints and VNs are permitted to exchange traffic can be used to mitigate risks.

For the control plane, a combination of authentication and encryption can be used between NVAs, between the NVA and NVE, as well as between different components of the split-NVE approach. All entities will need to properly authenticate with each other and enable encryption for their interactions as appropriate to protect sensitive information.

Leakage of sensitive information about users or other entities associated with VMs whose traffic is virtualized can also be covered by using encryption for the control-plane protocols and enforcing policies that control which NVO3 components are permitted to exchange control-plane traffic.

Control-plane elements such as NVEs and NVAs need to collect performance and other data in order to carry out their functions. This data can sometimes be unexpectedly sensitive, for example, allowing non-obvious inferences of activity within a VM. This provides a reason to minimize the data collected in some environments in order to limit potential exposure of sensitive information. As noted briefly in RFC 6973 [RFC6973] and RFC 7258 [RFC7258], there is an inevitable tension between being privacy sensitive and taking into account network operations in NVO3 protocol development.

See the NVO3 framework security considerations in RFC 7365 [RFC7365] for further discussion.

15. Informative References

[FRAMEWORK-MCAST] Ghanwani, A., Dunbar, L., McBride, M., Bannai, V., and R. Krishnan, "A Framework for Multicast in Network Virtualization Overlays", Work in Progress, draft-ietf-nvo3-mcast-framework-05, May 2016.

[IEEE.802.1Q] IEEE, "IEEE Standard for Local and metropolitan area networks--Bridges and Bridged Networks", IEEE 802.1Q-2014, DOI 10.1109/ieeestd.2014.6991462, <http://ieeexplore.ieee.org/servlet/ opac?punumber=6991460>.

[M.3400] ITU-T, "TMN management functions", ITU-T Recommendation M.3400, February 2000, <https://www.itu.int/rec/T-REC-M.3400-200002-I/>.

[NVE-NVA] Kreeger, L., Dutt, D., Narten, T., and D. Black, "Network Virtualization NVE to NVA Control Protocol Requirements", Work in Progress, draft-ietf-nvo3-nve-nva-cp-req-05, March 2016.

[NVO3-OAM] Chen, H., Ed., Ashwood-Smith, P., Xia, L., Iyengar, R., Tsou, T., Sajassi, A., Boucadair, M., Jacquenet, C., Daikoku, M., Ghanwani, A., and R. Krishnan, "NVO3 Operations, Administration, and Maintenance Requirements", Work in Progress, draft-ashwood-nvo3-oam-requirements-04, October 2015.

[NVO3-OVERLAY] Kumar, N., Pignataro, C., Rao, D., and S. Aldrin, "Detecting NVO3 Overlay Data Plane failures", Work in Progress, draft-kumar-nvo3-overlay-ping-01, January 2014.

[RFC826] Plummer, D., "Ethernet Address Resolution Protocol: Or Converting Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware", STD 37, RFC 826, DOI 10.17487/RFC0826, November 1982, <http://www.rfc-editor.org/info/rfc826>.

[RFC4301] Kent, S. and K. Seo, "Security Architecture for the Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, December 2005, <http://www.rfc-editor.org/info/rfc4301>.

[RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks (VPNs)", RFC 4364, DOI 10.17487/RFC4364, February 2006, <http://www.rfc-editor.org/info/rfc4364>.

[RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, DOI 10.17487/RFC4861, September 2007, <http://www.rfc-editor.org/info/rfc4861>.

[RFC6136] Sajassi, A., Ed. and D. Mohan, Ed., "Layer 2 Virtual Private Network (L2VPN) Operations, Administration, and Maintenance (OAM) Requirements and Framework", RFC 6136, DOI 10.17487/RFC6136, March 2011, <http://www.rfc-editor.org/info/rfc6136>.

[RFC6291] Andersson, L., van Helvoort, H., Bonica, R., Romascanu, D., and S. Mansfield, "Guidelines for the Use of the "OAM" Acronym in the IETF", BCP 161, RFC 6291, DOI 10.17487/RFC6291, June 2011, <http://www.rfc-editor.org/info/rfc6291>.

[RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., Morris, J., Hansen, M., and R. Smith, "Privacy Considerations for Internet Protocols", RFC 6973, DOI 10.17487/RFC6973, July 2013, <http://www.rfc-editor.org/info/rfc6973>.

[RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 2014, <http://www.rfc-editor.org/info/rfc7258>.

[RFC7348] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger, L., Sridhar, T., Bursell, M., and C. Wright, "Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014, <http://www.rfc-editor.org/info/rfc7348>.

[RFC7364] Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L., Kreeger, L., and M. Napierala, "Problem Statement: Overlays for Network Virtualization", RFC 7364, DOI 10.17487/RFC7364, October 2014, <http://www.rfc-editor.org/info/rfc7364>.

[RFC7365] Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y. Rekhter, "Framework for Data Center (DC) Network Virtualization", RFC 7365, DOI 10.17487/RFC7365, October 2014, <http://www.rfc-editor.org/info/rfc7365>.

[RFC7637] Garg, P., Ed. and Y. Wang, Ed., "NVGRE: Network Virtualization Using Generic Routing Encapsulation", RFC 7637, DOI 10.17487/RFC7637, September 2015, <http://www.rfc-editor.org/info/rfc7637>.

[TRACEROUTE-VXLAN] Nordmark, E., Appanna, C., Lo, A., Boutros, S., and A. Dubey, "Layer-Transcending Traceroute for Overlay Networks like VXLAN", Work in Progress, draft-nordmark-nvo3- transcending-traceroute-03, July 2016.

[USECASES] Yong, L., Dunbar, L., Toy, M., Isaac, A., and V. Manral, "Use Cases for Data Center Network Virtualization Overlay Networks", Work in Progress, draft-ietf-nvo3-use-case-15, December 2016.

[VXLAN-FAILURE] Jain, P., Singh, K., Balus, F., Henderickx, W., and V. Bannai, "Detecting VXLAN Segment Failure", Work in Progress, draft-jain-nvo3-vxlan-ping-00, June 2013.

Acknowledgements

Helpful comments and improvements to this document have come from Alia Atlas, Abdussalam Baryun, Spencer Dawkins, Linda Dunbar, Stephen Farrell, Anton Ivanov, Lizhong Jin, Suresh Krishnan, Mirja Kuehlwind, Greg Mirsky, Carlos Pignataro, Dennis (Xiaohong) Qin, Erik Smith, Takeshi Takahashi, Ziye Yang, and Lucy Yong.

Authors' Addresses

David Black Dell EMC

   Email: david.black@dell.com
        

Jon Hudson Independent

   Email: jon.hudson@gmail.com
        

Lawrence Kreeger Independent

   Email: lkreeger@gmail.com
        

Marc Lasserre Independent

   Email: mmlasserre@gmail.com
        

Thomas Narten IBM

   Email: narten@us.ibm.com
        