Internet Engineering Task Force (IETF)                    O. Bonaventure
Request for Comments: 8041                                     UCLouvain
Category: Informational                                        C. Paasch
ISSN: 2070-1721                                              Apple, Inc.
                                                                G. Detal
                                                                Tessares
                                                            January 2017

Use Cases and Operational Experience with Multipath TCP

Abstract

This document discusses both use cases and operational experience with Multipath TCP (MPTCP) in real networks. It lists several prominent use cases where Multipath TCP has been considered and is being used. It also gives insight to some heuristics and decisions that have helped to realize these use cases and suggests possible improvements.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc8041.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Use Cases
      2.1. Datacenters
      2.2. Cellular/WiFi Offload
      2.3. Multipath TCP Proxies
   3. Operational Experience
      3.1. Middlebox Interference
      3.2. Congestion Control
      3.3. Subflow Management
      3.4. Implemented Subflow Managers
      3.5. Subflow Destination Port
      3.6. Closing Subflows
      3.7. Packet Schedulers
      3.8. Segment Size Selection
      3.9. Interactions with the Domain Name System
      3.10. Captive Portals
      3.11. Stateless Webservers
      3.12. Load-Balanced Server Farms
   4. Security Considerations
   5. References
      5.1. Normative References
      5.2. Informative References
   Acknowledgements
   Authors' Addresses
1. Introduction

Multipath TCP was specified in [RFC6824] and five independent implementations have been developed. As of November 2016, Multipath TCP has been or is being implemented on the following platforms:

o Linux kernel [MultipathTCP-Linux]

o Apple iOS and macOS

o Citrix load balancers

o FreeBSD [FreeBSD-MPTCP]

o Oracle Solaris

The first three implementations are known to interoperate. Three of these implementations are open source (Linux kernel, FreeBSD and Apple's iOS and macOS). Apple's implementation is widely deployed.

Since the publication of [RFC6824] as an Experimental RFC, experience has been gathered by various network researchers and users about the operational issues that arise when Multipath TCP is used in today's Internet.

When the MPTCP working group was created, several use cases for Multipath TCP were identified [RFC6182]. Since then, other use cases have been proposed and some have been tested and even deployed. We describe these use cases in Section 2.

Section 3 focuses on the operational experience with Multipath TCP. Most of this experience comes from the utilization of the Multipath TCP implementation in the Linux kernel [MultipathTCP-Linux]. This open-source implementation has been downloaded and used by thousands of users all over the world. Many of these users have provided direct or indirect feedback by writing documents (scientific articles or blog messages) or posting to the mptcp-dev mailing list (see https://listes-2.sipr.ucl.ac.be/sympa/arc/mptcp-dev). This Multipath TCP implementation is actively maintained and continuously improved. It is used on various types of hosts, ranging from smartphones or embedded routers to high-end servers.

The Multipath TCP implementation in the Linux kernel is not, by far, the most widespread deployment of Multipath TCP. Since September 2013, Multipath TCP has also been supported on smartphones and tablets, beginning with iOS 7 [IETFJ]. There are likely hundreds of millions of MPTCP-enabled devices. This Multipath TCP implementation is currently only used to support the Siri voice recognition/control application. Some lessons learned from this deployment are described in [IETFJ].

Section 3 is organized as follows. Supporting middleboxes was one of the difficult issues in designing the Multipath TCP protocol. We explain in Section 3.1 which types of middleboxes the Linux kernel implementation of Multipath TCP supports and how it reacts when encountering them. Section 3.2 summarizes the MPTCP-specific congestion-control schemes that have been implemented. Sections 3.3 to 3.7 discuss heuristics and issues with respect to subflow management as well as scheduling across the subflows. Section 3.8 explains some problems that occurred with subflows having different Maximum Segment Size (MSS) values. Section 3.9 presents issues with respect to content delivery networks and suggests a solution to this issue. Finally, Section 3.10 documents an issue with captive portals where MPTCP will behave suboptimally.

2. Use Cases

Multipath TCP has been tested in several use cases. There is already an abundant amount of scientific literature on Multipath TCP [MPTCPBIB]. Several of the papers published in the scientific literature have identified possible improvements that are worth being discussed here.

2.1. Datacenters

A first, although initially unexpected, documented use case for Multipath TCP has been in datacenters [HotNets][SIGCOMM11]. Today's datacenters are designed to provide several paths between single-homed servers. The multiplicity of these paths comes from the utilization of Equal-Cost Multipath (ECMP) and other load-balancing techniques inside the datacenter. Most of the deployed load-balancing techniques in datacenters rely on hashes computed over the five tuple. Thus, all packets from the same TCP connection follow the same path and are not reordered. The results in [HotNets] demonstrate by simulations that Multipath TCP can achieve a better utilization of the available network by using multiple subflows for each Multipath TCP session. Although [RFC6182] assumes that at least one of the communicating hosts has several IP addresses, [HotNets] demonstrates that Multipath TCP is beneficial when both hosts are single-homed. This idea is analyzed in more detail in [SIGCOMM11], where the Multipath TCP implementation in the Linux kernel is modified to be able to use several subflows from the same IP address. Measurements in a public datacenter show the quantitative benefits of Multipath TCP [SIGCOMM11] in this environment.

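The five-tuple hashing described above can be sketched as follows. This is a hedged illustration, not any vendor's actual algorithm: real devices use proprietary hash functions, and the `ecmp_path` helper and its field order are invented here. It shows why a regular TCP connection is pinned to a single path, while additional MPTCP subflows, which differ in their source port, may be spread over several paths.

```python
import zlib

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port, n_paths):
    """Pick one of n_paths by hashing the five tuple (illustrative only)."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_paths

# A regular TCP connection: every packet carries the same five tuple,
# so every packet is forwarded on the same path and is never reordered.
p = ecmp_path("10.0.0.1", "10.0.0.2", 6, 49152, 80, 8)

# An additional MPTCP subflow from the same host uses a different source
# port, so it may (but need not) hash to a different path, letting one
# MPTCP session exploit several links between single-homed hosts.
q = ecmp_path("10.0.0.1", "10.0.0.2", 6, 49153, 80, 8)
```

The hash is deterministic per five tuple, which is exactly the property that keeps a single TCP connection in order but also confines it to one path.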
Although ECMP is widely used inside datacenters, this is not the only environment where there are different paths between a pair of hosts. ECMP and other load-balancing techniques such as Link Aggregation Groups (LAGs) are widely used in today's networks; having multiple paths between a pair of single-homed hosts is becoming the norm instead of the exception. Although these multiple paths often have the same cost (from an IGP metrics viewpoint), they do not necessarily have the same performance. For example, [IMC13c] reports the results of a long measurement study showing that load-balanced Internet paths between that same pair of hosts can have huge delay differences.

2.2. Cellular/WiFi Offload

A second use case that has been explored by several network researchers is the cellular/WiFi offload use case. Smartphones or other mobile devices equipped with two wireless interfaces are a very common use case for Multipath TCP. As of September 2015, this was also the largest deployment of MPTCP-enabled devices [IETFJ]. It was briefly discussed during IETF 88 [IETF88], but there is no published paper or report that analyses this deployment. For this reason, we only discuss published papers that have mainly used the Multipath TCP implementation in the Linux kernel for their experiments.

The performance of Multipath TCP in wireless networks was briefly evaluated in [NSDI12]. One experiment analyzes the performance of Multipath TCP on a client with two wireless interfaces. This evaluation shows that when the receive window is large, Multipath TCP can efficiently use the two available links. However, if the window becomes smaller, then packets sent on a slow path can block the transmission of packets on a faster path. In some cases, the performance of Multipath TCP over two paths can become lower than the performance of regular TCP over the best performing path. Two heuristics, reinjection and penalization, are proposed in [NSDI12] to solve this identified performance problem. These two heuristics have since been used in the Multipath TCP implementation in the Linux kernel. [CONEXT13] explored the problem in more detail and revealed some other scenarios where Multipath TCP can have difficulties in efficiently pooling the available paths. Improvements to the Multipath TCP implementation in the Linux kernel are proposed in [CONEXT13] to cope with some of these problems.

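The window-blocking effect described above can be quantified with a standard back-of-the-envelope bound: to keep a fast subflow from stalling while data is in flight on a slow one, the connection-level receive window must roughly cover the sum of the subflows' bandwidths multiplied by the largest round-trip time. The sketch below is illustrative arithmetic only; the function name and the example link characteristics are invented, not taken from [NSDI12].

```python
def min_receive_window(bandwidths_bps, rtt_max_s):
    """Rough lower bound (in bytes) on the MPTCP-level receive window
    needed so that in-order delivery over the slowest path does not
    block transmission on the faster ones.
    bandwidths_bps: per-subflow bandwidths in bits per second.
    rtt_max_s: round-trip time of the slowest subflow, in seconds."""
    return sum(b / 8 for b in bandwidths_bps) * rtt_max_s

# Example: a 50 Mb/s WiFi path combined with a 10 Mb/s cellular path
# whose RTT is 200 ms needs on the order of 1.5 MB of window space;
# with a much smaller window, packets in flight on the slow path hold
# back the fast path, and MPTCP can fall below single-path TCP.
window = min_receive_window([50e6, 10e6], 0.200)
```

This is the quantitative intuition behind the reinjection and penalization heuristics: when the window is too small for the bound above, data stuck on the slow path must be reinjected on the fast one.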
The first experimental analysis of Multipath TCP in a public wireless environment was presented in [Cellnet12]. These measurements explore the ability of Multipath TCP to use two wireless networks (real WiFi and 3G networks). Three modes of operation are compared. The first mode of operation is the simultaneous use of the two wireless networks. In this mode, Multipath TCP pools the available resources and uses both wireless interfaces. This mode provides fast handover from WiFi to cellular, or the opposite, when the user moves. Measurements presented in [CACM14] show that the handover from one wireless network to another is not an abrupt process. When a host moves, there are regions where the quality of one of the wireless networks is weaker than the other, but the host considers this wireless network to still be up. When a mobile host enters such regions, its ability to send packets over another wireless network is important to ensure a smooth handover. This is clearly illustrated by the packet trace discussed in [CACM14].

Many cellular networks use volume-based pricing; users often prefer to use unmetered WiFi networks when available instead of metered cellular networks. [Cellnet12] implements support for the MP_PRIO option to explore two other modes of operation.

In the backup mode, Multipath TCP opens a TCP subflow over each interface, but the cellular interface is configured in backup mode. This implies that data flows only over the WiFi interface when both interfaces are considered to be active. If the WiFi interface fails, then the traffic switches quickly to the cellular interface, ensuring a smooth handover from the user's viewpoint [Cellnet12]. The cost of this approach is that the WiFi and cellular interfaces are likely to remain active all the time since all subflows are established over the two interfaces.

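The backup mode amounts to a small scheduling rule. The sketch below is a simplified model of that policy, not the Linux scheduler's code; the class and function names are invented for illustration, and `is_backup` stands for the priority advertised with the MP_PRIO option.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    is_backup: bool   # priority signalled with the MP_PRIO option
    is_active: bool   # interface up and subflow established

def eligible_subflows(subflows):
    """Backup-mode policy: send data on regular subflows while at least
    one is usable; use backup subflows only when all regular ones fail."""
    regular = [s for s in subflows if s.is_active and not s.is_backup]
    return regular or [s for s in subflows if s.is_active and s.is_backup]

# WiFi is a regular subflow, cellular is marked as backup: as long as
# WiFi is up, only the WiFi subflow carries data, although both subflows
# are established (which keeps both radios active).
flows = [Subflow("wifi", False, True), Subflow("cellular", True, True)]
```

When the WiFi subflow is marked inactive, the same rule immediately selects the cellular subflow, which is what makes the handover fast: no new subflow needs to be established.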
The single-path mode is slightly different. This mode benefits from the break-before-make capability of Multipath TCP. When an MPTCP session is established, a subflow is created over the WiFi interface. No packet is sent over the cellular interface as long as the WiFi interface remains up [Cellnet12]. This implies that the cellular interface can remain idle and battery capacity is preserved. When the WiFi interface fails, a new subflow is established over the cellular interface in order to preserve the established Multipath TCP sessions. Compared to the backup mode described earlier, measurements reported in [Cellnet12] indicate that this mode of operation is characterized by a throughput drop while the cellular interface is brought up and the subflows are reestablished.

From a protocol viewpoint, [Cellnet12] discusses the problem posed by the unreliability of the REMOVE_ADDR option and proposes a small protocol extension to allow hosts to reliably exchange this option. It would be useful to analyze packet traces to understand whether the unreliability of the REMOVE_ADDR option poses an operational problem in real deployments.

Another study of the performance of Multipath TCP in wireless networks was reported in [IMC13b]. This study uses laptops connected to various cellular ISPs and WiFi hotspots. It compares various file transfer scenarios. [IMC13b] observes that 4-path MPTCP outperforms 2-path MPTCP, especially for larger files. However, for three congestion-control algorithms (LIA, OLIA, and Reno -- see Section 3.2), there is no significant performance difference for file sizes smaller than 4 MB.

A different study of the performance of Multipath TCP with two wireless networks is presented in [INFOCOM14]. In this study the two networks had different qualities: a good network and a lossy network. When using two paths with different packet-loss ratios, the Multipath TCP congestion-control scheme moves traffic away from the lossy link that is considered to be congested. However, [INFOCOM14] documents an interesting scenario that is summarized hereafter.

   client ----------- path1 -------- server
     |                                  |
     +--------------- path2 ------------+

Figure 1: Simple network topology

Initially, the two paths in Figure 1 have the same quality and Multipath TCP distributes the load over both of them. During the transfer, path2 becomes lossy, e.g., because the client moves. Multipath TCP detects the packet losses and they are retransmitted over path1. This enables the data transfer to continue over this path. However, the subflow over path2 is still up and transmits one packet from time to time. Although the packets lost on path2 have been acknowledged over the first subflow (at the MPTCP level), they have not been acknowledged at the TCP level over the second subflow. To preserve the continuity of the sequence numbers over the second subflow, TCP will continue to retransmit these segments until either they are acknowledged or the maximum number of retransmissions is reached. This behavior is clearly inefficient and may lead to blocking, since the second subflow will consume window space to be able to retransmit these packets. [INFOCOM14] proposes a new Multipath TCP option to solve this problem. In practice, a new TCP option is probably not required. When the client detects that the data transmitted over the second subflow has been acknowledged over the first subflow, it could decide to terminate the second subflow by sending a RST segment. If the interface associated with this subflow is still up, a new subflow could be immediately reestablished. It would then be immediately usable to send new data and would not be forced to first retransmit the previously transmitted data. As of this writing, this dynamic management of the subflows is not yet implemented in the Multipath TCP implementation in the Linux kernel.

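The client-side heuristic suggested above (reset a subflow whose unacknowledged data has already been acknowledged at the MPTCP level, then reestablish it) could be sketched as follows. This is a hypothetical illustration of the decision, not the behavior of any implementation; the function and its arguments are invented.

```python
def can_reset_subflow(unacked_ranges, mptcp_cum_ack):
    """Return True when every byte still unacknowledged at the TCP level
    on this subflow (given as [start, end) data-sequence ranges) is
    already covered by the MPTCP-level cumulative acknowledgement,
    i.e., the data was successfully reinjected on another subflow and
    TCP-level retransmissions here would carry no useful data."""
    return all(end <= mptcp_cum_ack for (_start, end) in unacked_ranges)

# Segments covering data-sequence ranges [1000, 2000) and [2000, 3000)
# are still unacked on the lossy subflow, but the connection-level
# cumulative ack has reached 3000 thanks to reinjection on the good
# path: the subflow can be torn down with a RST and later reestablished
# instead of retransmitting data the receiver no longer needs.
lossy_subflow_ranges = [(1000, 2000), (2000, 3000)]
```

If any range extends beyond the connection-level cumulative ack, the subflow still carries data not yet delivered by another path, and resetting it would lose that data.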
Some studies have started to analyze the performance of Multipath TCP on smartphones with real applications. In contrast with the bulk transfers that are used in many publications, many deployed applications do not exchange huge amounts of data and mainly use small connections. [COMMAG2016] proposes a software testing framework that makes it possible to automate Android applications in order to study their interactions with Multipath TCP. [PAM2016] analyses a one-month packet trace of all the packets exchanged by a dozen smartphones used by regular users. This analysis reveals that short connections are important on smartphones and that the main benefit of using Multipath TCP on smartphones is the ability to perform seamless handovers between different wireless networks. Long connections benefit from these handovers.

2.3. Multipath TCP Proxies

As Multipath TCP is not yet widely deployed on both clients and servers, several deployments have used various forms of proxies. Two families of solutions are currently being used or tested.

A first use case is when an MPTCP-enabled client wants to use several interfaces to reach a regular TCP server. A typical use case is a smartphone that needs to use both its WiFi and its cellular interface to transfer data. Several types of proxies are possible for this use case. An HTTP proxy deployed on an MPTCP-capable server would enable the smartphone to use Multipath TCP to access regular web servers. Obviously, this solution only works for applications that rely on HTTP. Another possibility is to use a proxy that can convert any Multipath TCP connection into a regular TCP connection. MPTCP-specific proxies have been proposed [HotMiddlebox13b] [HAMPEL].

Another possibility leverages the SOCKS protocol [RFC1928]. SOCKS is often used in enterprise networks to allow clients to reach external servers. For this, the client opens a TCP connection to the SOCKS server, which relays it to the final destination. If the client and the SOCKS server both use Multipath TCP, but the final destination does not, then Multipath TCP can still be used on the path between the client and the SOCKS server. At IETF 93, Korea Telecom announced that they had deployed (in June 2015) a commercial service that uses Multipath TCP on smartphones. These smartphones access regular TCP servers through a SOCKS proxy. This enables them to achieve throughputs of up to 850 Mbps [KT].

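For reference, the first client messages of the SOCKS5 exchange [RFC1928] that such a deployment relies on look as follows. This is a minimal sketch of the no-authentication case only; the helper names are invented, and in the MPTCP deployment above these bytes simply travel over the Multipath TCP connection to the proxy.

```python
import struct

def socks5_greeting() -> bytes:
    # VER=5, NMETHODS=1, METHODS=[0x00] (no authentication required).
    return bytes([0x05, 0x01, 0x00])

def socks5_connect_request(host: str, port: int) -> bytes:
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name),
    # then the length-prefixed name and the port in network byte order.
    name = host.encode()
    return (bytes([0x05, 0x01, 0x00, 0x03, len(name)])
            + name + struct.pack("!H", port))

# The proxy answers the greeting with its chosen method, then opens a
# regular TCP connection to the destination and replies to the CONNECT.
req = socks5_connect_request("example.org", 80)
```

Because the SOCKS exchange is ordinary TCP payload, no change to the protocol is needed: Multipath TCP on the client-to-proxy leg is invisible to the destination server.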
Measurements performed with Android smartphones [Mobicom15] show that popular applications work correctly through a SOCKS proxy and MPTCP-enabled smartphones. Thanks to Multipath TCP, long-lived connections can be spread over the two available interfaces. However, for short-lived connections, most of the data is sent over the initial subflow that is created over the interface corresponding to the default route and the second subflow is almost not used [PAM2016].

A second use case is when Multipath TCP is used by middleboxes, typically inside access networks. Various network operators are discussing and evaluating solutions for hybrid access networks [TR-348]. Such networks arise when a network operator controls two different access network technologies, e.g., wired and cellular, and wants to combine them to improve the bandwidth offered to the end users [HYA-ARCH]. Several solutions are currently investigated for such networks [TR-348]. Figure 2 shows the organization of such a network. When a client creates a normal TCP connection, it is intercepted by the Hybrid CPE (HCPE), which converts it into a Multipath TCP connection so that it can use the available access networks (DSL and LTE in the example). The Hybrid Access Gateway (HAG) does the opposite to ensure that the regular server sees a normal TCP connection. Some of the solutions currently discussed for hybrid networks use Multipath TCP on the HCPE and the HAG. Other solutions rely on tunnels between the HCPE and the HAG [GRE-NOTIFY].

   client --- HCPE ------ DSL ------- HAG --- internet --- server
               |                       |
               +------- LTE -----------+

Figure 2: Hybrid Access Network

3. Operational Experience
3.1. Middlebox Interference

The interference caused by various types of middleboxes has been an important concern during the design of the Multipath TCP protocol. Three studies on the interactions between Multipath TCP and middleboxes are worth discussing.

The first analysis appears in [IMC11]. This paper was the main motivation for Multipath TCP incorporating various techniques to cope with middlebox interference. More specifically, Multipath TCP has been designed to cope with middleboxes that:

o change source or destination addresses

o change source or destination port numbers

o change TCP sequence numbers

o split or coalesce segments

o remove TCP options

o modify the payload of TCP segments

These middlebox interferences have all been included in the MBtest suite [MBTest]. This test suite is used in [HotMiddlebox13] to verify the reaction of the Multipath TCP implementation in the Linux kernel [MultipathTCP-Linux] when faced with middlebox interference. The test environment used for this evaluation is a dual-homed client connected to a single-homed server. The middlebox behavior can be activated on any of the paths. The main results of this analysis are:

o the Multipath TCP implementation in the Linux kernel is not affected by a middlebox that performs NAT or modifies TCP sequence numbers

o when a middlebox removes the MP_CAPABLE option from the initial SYN segment, the Multipath TCP implementation in the Linux kernel falls back correctly to regular TCP

o when a middlebox removes the DSS option from all data segments, the Multipath TCP implementation in the Linux kernel falls back correctly to regular TCP

o when a middlebox performs segment coalescing, the Multipath TCP implementation in the Linux kernel is still able to accurately extract the data corresponding to the indicated mapping

o when a middlebox performs segment splitting, the Multipath TCP implementation in the Linux kernel correctly reassembles the data corresponding to the indicated mapping. [HotMiddlebox13] shows, in Figure 4 in Section 3.3, a corner case with segment splitting that may lead to a desynchronization between the two hosts.

o 当中间盒执行段拆分时,Linux内核中的多路径TCP实现会正确地重新组装与指定映射对应的数据。[HotMiddlebox13]第3.3节的图4展示了段拆分的一种极端情况,它可能导致两台主机之间的失步。

The interactions between Multipath TCP and real deployed middleboxes are also analyzed in [HotMiddlebox13]; a particular scenario with the FTP Application Level Gateway running on a NAT is described.

[HotMiddlebox13]中还分析了多路径TCP和实际部署的中间盒之间的交互;描述了在NAT上运行FTP应用程序级网关的特定场景。

Middlebox interference can also be detected by analyzing packet traces on MPTCP-enabled servers. A closer look at the packets received on the multipath-tcp.org server [TMA2015] shows that among the 184,000 Multipath TCP connections, only 125 of them were falling back to regular TCP. These connections originated from 28 different client IP addresses. These include 91 HTTP connections and 34 FTP connections. The FTP interference is expected since Application Level Gateways used for FTP modify the TCP payload and the DSS Checksum detects these modifications. The HTTP interference appeared only on the direction from server to client and could have been caused by transparent proxies deployed in cellular or enterprise networks. A longer trace is discussed in [COMCOM2016] and similar conclusions about the middlebox interference are provided.

还可以通过分析启用MPTCP的服务器上的数据包跟踪来检测中间盒干扰。仔细查看multipath-tcp.org服务器[TMA2015]上接收到的数据包可以发现,在184,000个多路径TCP连接中,只有125个回退到了常规TCP。这些连接来自28个不同的客户端IP地址,其中包括91个HTTP连接和34个FTP连接。FTP干扰是预料之中的,因为用于FTP的应用层网关会修改TCP有效负载,而DSS校验和会检测到这些修改。HTTP干扰仅出现在从服务器到客户端的方向上,可能是由部署在蜂窝或企业网络中的透明代理造成的。[COMCOM2016]中讨论了一份更长的跟踪,并得出了关于中间盒干扰的类似结论。

From an operational viewpoint, knowing that Multipath TCP can cope with various types of middlebox interference is important. However, there are situations where the network operators need to gather information about where a particular middlebox interference occurs. The tracebox software [tracebox] described in [IMC13a] is an extension of the popular traceroute software that enables network operators to check at which hop a particular field of the TCP header (including options) is modified. It has been used by several network operators to debug various middlebox interference problems. Experience with tracebox indicates that supporting the ICMP extension defined in [RFC1812] makes it easier to debug middlebox problems in IPv4 networks.

从操作的角度来看,知道多路径TCP可以处理各种类型的中间盒干扰是很重要的。然而,在某些情况下,网络运营商需要收集有关特定中间盒干扰发生位置的信息。[IMC13a]中描述的tracebox软件[tracebox]是流行的traceroute软件的扩展,它使网络运营商能够检查TCP报头(包括选项)的特定字段在哪一跳被修改。一些网络运营商已经使用它来调试各种中间盒干扰问题。使用tracebox的经验表明,支持[RFC1812]中定义的ICMP扩展可以更容易地调试IPv4网络中的中间盒问题。

Users of the Multipath TCP implementation have reported some experience with middlebox interference. The strangest scenario has been a middlebox that accepts the Multipath TCP options in the SYN segment but later replaces Multipath TCP options with a TCP EOL option [StrangeMbox]. This causes Multipath TCP to perform a fallback to regular TCP without any impact on the application.

多路径TCP实现的用户报告了一些关于中间盒干扰的经验。最奇怪的场景是一个中间盒,它接受SYN段中的多路径TCP选项,但之后用TCP EOL选项替换多路径TCP选项[StrangeMbox]。这会导致多路径TCP回退到常规TCP,而不会对应用程序产生任何影响。

3.2. Congestion Control
3.2. 拥塞控制

Congestion control has been an important challenge for Multipath TCP. The coupled congestion-control scheme defined in [RFC6356] is an adaptation of the NewReno algorithm. A detailed description of this coupled algorithm is provided in [NSDI11]. It is the default scheme in the Linux implementation of Multipath TCP, but Linux supports other schemes.

拥塞控制一直是多路径TCP面临的一个重要挑战。[RFC6356]中定义的耦合拥塞控制方案是NewReno算法的一种改编。[NSDI11]中提供了该耦合算法的详细描述。它是多路径TCP的Linux实现中的默认方案,但Linux也支持其他方案。

The second congestion-control scheme is OLIA [CONEXT12]. It is also an adaptation of the NewReno single path congestion-control scheme to support multiple paths. Simulations [CONEXT12] and measurements [CONEXT13] have shown that it provides some performance benefits compared to the default coupled congestion-control scheme.

第二种拥塞控制方案是OLIA[CONEXT12]。它也是NewReno单路径拥塞控制方案的一种改进,以支持多条路径。仿真[CONEXT12]和测量[CONEXT13]表明,与默认的耦合拥塞控制方案相比,它提供了一些性能优势。

The delay-based scheme proposed in [ICNP12] has also been ported to the Multipath TCP implementation in the Linux kernel. It has been evaluated by using simulations [ICNP12] and measurements [PaaschPhD].

[ICNP12]中提出的基于延迟的方案也被移植到Linux内核中的多路径TCP实现中。通过使用模拟[ICNP12]和测量[PaaschPhD]对其进行了评估。

BALIA, defined in [BALIA], provides a better balance between TCP friendliness, responsiveness, and window oscillation.

[BALIA]中定义的BALIA在TCP友好性、响应性和窗口振荡之间提供了更好的平衡。

These different congestion-control schemes have been compared in several articles. [CONEXT13] and [PaaschPhD] compare these algorithms in an emulated environment. The evaluation showed that the delay-based congestion-control scheme is less able to efficiently use the available links than the three other schemes.

几篇文章对这些不同的拥塞控制方案进行了比较。[CONEXT13]和[PaaschPhD]在模拟环境中比较这些算法。评估表明,基于延迟的拥塞控制方案比其他三种方案更不能有效地利用可用链路。

3.3. Subflow Management
3.3. 子流管理

The multipath capability of Multipath TCP comes from the utilization of one subflow per path. The Multipath TCP architecture [RFC6182] and the protocol specification [RFC6824] define the basic usage of the subflows and the protocol mechanisms that are required to create and terminate them. However, there are no guidelines on how subflows are used during the lifetime of a Multipath TCP session. Most of the published experiments with Multipath TCP have been performed in controlled environments. Still, based on the experience running them and discussions on the mptcp-dev mailing list, interesting lessons have been learned about the management of these subflows.

多路径TCP的多路径能力来自于每条路径使用一个子流。多路径TCP体系结构[RFC6182]和协议规范[RFC6824]定义了子流的基本用法以及创建和终止子流所需的协议机制。但是,对于在多路径TCP会话的生存期内如何使用子流,并没有指导原则。大多数已发表的多路径TCP实验都是在受控环境中进行的。尽管如此,基于运行这些实验的经验以及mptcp-dev邮件列表上的讨论,人们已经学到了关于这些子流管理的有趣经验。

From a subflow viewpoint, the Multipath TCP protocol is completely symmetrical. Both the clients and the server have the capability to create subflows. However, in practice, the existing Multipath TCP implementations have opted for a strategy where only the client creates new subflows. The main motivation for this strategy is that often the client resides behind a NAT or a firewall, preventing passive subflow openings on the client. Although there are environments such as datacenters where this problem does not occur, as of this writing, no precise requirement has emerged for allowing the server to create new subflows.

从子流的角度来看,多路径TCP协议是完全对称的。客户端和服务器都有能力创建子流。然而,在实践中,现有的多路径TCP实现选择了只有客户端创建新子流的策略。此策略的主要动机是,客户端通常位于NAT或防火墙后面,这阻止了在客户端上被动打开子流。尽管在数据中心等一些环境中不会出现此问题,但在撰写本文时,尚未出现允许服务器创建新子流的明确需求。

3.4. Implemented Subflow Managers
3.4. 实现的子流管理器

The Multipath TCP implementation in the Linux kernel includes several strategies to manage the subflows that compose a Multipath TCP session. The basic subflow manager is the full-mesh. As the name implies, it creates a full-mesh of subflows between the communicating hosts.

Linux内核中的多路径TCP实现包括几种策略,用于管理组成多路径TCP会话的子流。基本子流管理器是完整网格。顾名思义,它在通信主机之间创建了一个完整的子流网格。

The most frequent use case for this subflow manager is a multihomed client connected to a single-homed server. In this case, one subflow is created for each interface on the client. The current implementation of the full-mesh subflow manager is static. The subflows are created immediately after the creation of the initial subflow. If one subflow fails during the lifetime of the Multipath TCP session (e.g., due to excessive retransmissions or the loss of the corresponding interface), it is not always reestablished. There is ongoing work to enhance the full-mesh path manager to deal with such events.

此子流管理器最常见的用例是连接到单宿主服务器的多宿主客户端。在这种情况下,将为客户端上的每个接口创建一个子流。全网格子流管理器的当前实现是静态的。这些子流在初始子流创建后立即创建。如果某个子流在多路径TCP会话的生存期内失败(例如,由于过度重传或相应接口的丢失),它并不总是会被重新建立。目前正在开展增强全网格路径管理器以处理此类事件的工作。

When the server is multihomed, using the full-mesh subflow manager may lead to a large number of subflows being established. For example, consider a dual-homed client connected to a server with three interfaces. In this case, even if the subflows are only created by the client, six subflows will be established. This may be excessive in some environments, in particular when the client and/or the server have a large number of interfaces. Implementations should limit the number of subflows that are used.

当服务器是多宿主的时,使用全网格子流管理器可能会导致建立大量子流。例如,考虑一个连接到具有三个接口的服务器的双宿主客户端。在这种情况下,即使子流仅由客户端创建,也将建立六个子流。在某些环境中,尤其是当客户端和/或服务器具有大量接口时,这可能会过多。实现应当限制所使用的子流数量。
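The arithmetic above can be sketched as a small helper. The function name and the explicit cap are illustrative assumptions for this sketch, not an actual implementation knob:

```python
def full_mesh_subflows(client_ifaces, server_ifaces, max_subflows=None):
    """Number of subflows a full-mesh path manager would establish.

    Each client interface opens one subflow toward each server
    address, so the count is the product of the two interface counts,
    optionally capped as the text suggests.  (Hypothetical helper,
    for illustration only.)
    """
    count = client_ifaces * server_ifaces
    if max_subflows is not None:
        count = min(count, max_subflows)
    return count

# The example from the text: a dual-homed client and a server with
# three interfaces yield six subflows, even though only the client
# initiates them.
print(full_mesh_subflows(2, 3))                   # 6
print(full_mesh_subflows(2, 3, max_subflows=4))   # 4
```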

Creating subflows between multihomed clients and servers may sometimes lead to operational issues as observed by discussions on the mptcp-dev mailing list. In some cases, the network operators would like to have a better control on how the subflows are created by Multipath TCP [MPTCP-MAX-SUB]. This might require the definition of policy rules to control the operation of the subflow manager. The two scenarios below illustrate some of these requirements.

正如mptcp-dev邮件列表上的讨论所观察到的,在多宿主客户端和服务器之间创建子流有时可能会导致操作问题。在某些情况下,网络运营商希望能够更好地控制多路径TCP如何创建子流[MPTCP-MAX-SUB]。这可能需要定义策略规则来控制子流管理器的操作。下面的两个场景说明了其中的一些需求。

                host1 ----------  switch1 ----- host2
                  |                   |            |
                  +--------------  switch2 --------+
        

Figure 3: Simple Switched Network Topology

图3:简单交换网络拓扑

Consider the simple network topology shown in Figure 3. From an operational viewpoint, a network operator could want to create two subflows between the communicating hosts. From a bandwidth utilization viewpoint, the most natural paths are host1-switch1-host2 and host1-switch2-host2. However, a Multipath TCP implementation running on these two hosts may sometimes have difficulties to obtain this result.

考虑图3所示的简单网络拓扑结构。从操作角度来看,网络运营商可能希望在通信主机之间创建两个子流。从带宽利用率的角度来看,最自然的路径是host1-switch1-host2和host1-switch2-host2。但是,在这两台主机上运行的多路径TCP实现有时可能难以获得此结果。

To understand the difficulty, let us consider different allocation strategies for the IP addresses. A first strategy is to assign two subnets: subnetA (resp. subnetB) contains the IP addresses of host1's interface to switch1 (resp. switch2) and host2's interface to switch1 (resp. switch2). In this case, a Multipath TCP subflow manager should only create one subflow per subnet. To enforce the utilization of these paths, the network operator would have to specify a policy that prefers the subflows in the same subnet over subflows between addresses in different subnets. It should be noted that the policy should probably also specify how the subflow manager should react when an interface or subflow fails.

为了理解其中的困难,让我们考虑不同的IP地址分配策略。第一种策略是分配两个子网:子网A包含host1与host2连接switch1的接口的IP地址,子网B包含host1与host2连接switch2的接口的IP地址。在这种情况下,多路径TCP子流管理器应当为每个子网只创建一个子流。为了强制使用这些路径,网络运营商必须指定一个策略,使同一子网内的子流优先于不同子网地址之间的子流。应当注意,该策略可能还应指定当接口或子流发生故障时子流管理器应如何反应。
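A per-subnet policy of this kind can be sketched with the standard `ipaddress` module; the addresses and the /24 prefix below are illustrative stand-ins for the subnetA/subnetB assignment described above:

```python
import ipaddress

def subflows_per_subnet(client_addrs, server_addrs, prefix_len=24):
    """Pair client and server addresses that share a subnet, creating
    one subflow per subnet -- a sketch of the per-subnet policy
    described above (names and prefix length are assumptions)."""
    pairs = []
    seen = set()
    for c in client_addrs:
        c_net = ipaddress.ip_network(f"{c}/{prefix_len}", strict=False)
        for s in server_addrs:
            if ipaddress.ip_address(s) in c_net and c_net not in seen:
                pairs.append((c, s))
                seen.add(c_net)
    return pairs

# host1 and host2 from Figure 3, with hypothetical subnets
# 10.1.0.0/24 (via switch1) and 10.2.0.0/24 (via switch2):
print(subflows_per_subnet(["10.1.0.1", "10.2.0.1"],
                          ["10.1.0.2", "10.2.0.2"]))
# [('10.1.0.1', '10.1.0.2'), ('10.2.0.1', '10.2.0.2')]
```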

A second strategy is to use a single subnet for all IP addresses. In this case, it becomes more difficult to specify a policy that indicates which subflows should be established.

第二种策略是对所有IP地址使用一个子网。在这种情况下,指定指示应建立哪些子流的策略变得更加困难。

The second subflow manager that is currently supported by the Multipath TCP implementation in the Linux kernel is the ndiffport subflow manager. This manager was initially created to exploit the path diversity that exists between single-homed hosts due to the utilization of flow-based load-balancing techniques [SIGCOMM11]. This subflow manager creates N subflows between the same pair of IP addresses. The N subflows are created by the client and differ only in the source port selected by the client. It was not designed to be used on multihomed hosts.

Linux内核中的多路径TCP实现当前支持的第二个子流管理器是ndiffport子流管理器。创建此管理器最初是为了利用由于使用基于流的负载均衡技术而存在于单宿主主机之间的路径多样性[SIGCOMM11]。此子流管理器在同一对IP地址之间创建N个子流。这N个子流由客户端创建,仅在客户端选择的源端口上有所不同。它并非为在多宿主主机上使用而设计。

A more flexible subflow manager has been proposed, implemented and evaluated in [CONEXT15]. This subflow manager exposes various kernel events to a user space daemon that decides when subflows need to be created and terminated based on various policies.

[CONEXT15]中提出、实施并评估了一种更灵活的子流管理器。此子流管理器向用户空间守护进程公开各种内核事件,该守护进程根据各种策略决定何时需要创建和终止子流。

3.5. Subflow Destination Port
3.5. 子流目的端口

The Multipath TCP protocol relies on the token contained in the MP_JOIN option to associate a subflow to an existing Multipath TCP session. This implies that there is no restriction on the source address, destination address and source or destination ports used for the new subflow. The ability to use different source and destination addresses is key to support multihomed servers and clients. The ability to use different destination port numbers is worth discussing because it has operational implications.

多路径TCP协议依赖于MP_JOIN选项中包含的令牌将子流与现有多路径TCP会话相关联。这意味着对用于新子流的源地址、目标地址和源或目标端口没有限制。能够使用不同的源地址和目标地址是支持多主机服务器和客户端的关键。使用不同目的地端口号的能力值得讨论,因为它具有操作意义。

For illustration, consider a dual-homed client that creates a second subflow to reach a single-homed server as illustrated in Figure 4.

为了说明,考虑一个双宿主客户端创建第二个子流以到达单宿主服务器,如图4所示。

           client ------- r1 --- internet --- server
               |                   |
               +----------r2-------+
        

Figure 4: Multihomed-Client Connected to Single-Homed Server

图4:连接到单主机服务器的多主机客户端

When the Multipath TCP implementation in the Linux kernel creates the second subflow, it uses the same destination port as the initial subflow. This choice is motivated by the fact that the server might be protected by a firewall and only accept TCP connections (including subflows) on the official port number. Using the same destination port for all subflows is also useful for operators that rely on the port numbers to track application usage in their network.

当Linux内核中的多路径TCP实现创建第二个子流时,它使用与初始子流相同的目标端口。这种选择的动机是服务器可能受到防火墙的保护,并且只接受官方端口号上的TCP连接(包括子流)。对于依赖端口号跟踪网络中应用程序使用情况的运营商来说,为所有子流使用相同的目标端口也很有用。

There have been suggestions from Multipath TCP users to modify the implementation to allow the client to use different destination ports to reach the server. This suggestion seems mainly motivated by traffic-shaping middleboxes that are used in some wireless networks. In networks where different shaping rates are associated with different destination port numbers, this could allow Multipath TCP to reach a higher performance. This behavior is valid according to the Multipath TCP specification [RFC6824]. An application could use an enhanced socket API [SOCKET] to behave in this way.

多路径TCP用户曾建议修改实现,以允许客户端使用不同的目标端口到达服务器。这一建议似乎主要是由某些无线网络中使用的流量整形中间盒所推动的。在将不同整形速率与不同目标端口号相关联的网络中,这可以让多路径TCP达到更高的性能。根据多路径TCP规范[RFC6824],此行为是有效的。应用程序可以使用增强的套接字API[SOCKET]来以这种方式运行。

However, from an implementation point of view, supporting different destination ports for the same Multipath TCP connection can cause some issues. A legacy implementation of a TCP stack creates a listening socket to react upon incoming SYN segments. The listening socket handles the SYN segments that are sent on a specific port number. Demultiplexing incoming segments can thus be done solely by looking at the IP addresses and the port numbers. With Multipath TCP, however, incoming SYN segments may carry an MP_JOIN option with a different destination port. This means that all incoming segments that did not match an existing listening socket or an already established socket must be parsed for a possible MP_JOIN option. This imposes an additional cost on servers that did not exist on legacy TCP implementations.

但是,从实现的角度来看,为同一多路径TCP连接支持不同的目标端口可能会导致一些问题。传统的TCP协议栈创建一个侦听套接字来响应传入的SYN段。侦听套接字处理发送到特定端口号的SYN段。因此,只需查看IP地址和端口号即可对传入段进行解复用。但是,对于多路径TCP,传入的SYN段可能携带具有不同目标端口的MP_JOIN选项。这意味着所有与现有侦听套接字或已建立套接字都不匹配的传入段,都必须被解析以查找可能的MP_JOIN选项。这给服务器带来了传统TCP实现中不存在的额外成本。
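The extra parsing step can be illustrated with a minimal TCP-option scanner. The option layout follows [RFC6824] (option kind 30, MP_JOIN subtype 1 in the upper nibble of the third byte); the function itself is a sketch, not kernel code:

```python
MPTCP_KIND = 30
MP_JOIN_SUBTYPE = 0x1

def has_mp_join(options):
    """Scan a TCP-options blob for an MP_JOIN option (kind 30,
    subtype 1) -- the extra work a server must do for every incoming
    segment that matches no listening or established socket."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:              # End of Option List
            break
        if kind == 1:              # No-Operation, single byte
            i += 1
            continue
        if i + 1 >= len(options):
            break                  # truncated option
        length = options[i + 1]
        if length < 2:
            break                  # malformed length
        if kind == MPTCP_KIND and length >= 3:
            subtype = options[i + 2] >> 4
            if subtype == MP_JOIN_SUBTYPE:
                return True
        i += length
    return False

# An MP_JOIN option from a SYN: kind=30, len=12, subtype/flags byte,
# address ID, receiver's token (4 bytes), sender's nonce (4 bytes).
syn_options = bytes([30, 12, 0x10, 1]) + b"\x00" * 8
print(has_mp_join(syn_options))   # True
```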

3.6. Closing Subflows
3.6. 关闭子流
                    client                       server
                       |                           |
   MPTCP: ESTABLISHED  |                           | MPTCP: ESTABLISHED
   Sub: ESTABLISHED    |                           | Sub: ESTABLISHED
                       |                           |
                       |         DATA_FIN          |
   MPTCP: CLOSE-WAIT   | <------------------------ | close()   (step 1)
   Sub: ESTABLISHED    |         DATA_ACK          |
                       | ------------------------> | MPTCP: FIN-WAIT-2
                       |                           | Sub: ESTABLISHED
                       |                           |
                       |  DATA_FIN + subflow-FIN   |
   close()/shutdown()  | ------------------------> | MPTCP: TIME-WAIT
   (step 2)            |        DATA_ACK           | Sub: CLOSE-WAIT
   MPTCP: CLOSED       | <------------------------ |
   Sub: FIN-WAIT-2     |                           |
                       |                           |
                       |        subflow-FIN        |
   MPTCP: CLOSED       | <------------------------ | subflow-close()
   Sub: TIME-WAIT      |        subflow-ACK        |
   (step 3)            | ------------------------> | MPTCP: TIME-WAIT
                       |                           | Sub: CLOSED
                       |                           |
        

Figure 5: Multipath TCP may not be able to avoid time-wait state on the subflow (indicated as Sub in the drawing), even if enforced by the application on the client-side.

图5:多路径TCP可能无法避免子流上的时间等待状态(在图形中指示为Sub),即使是由客户端的应用程序强制执行。

Figure 5 shows a very particular issue within Multipath TCP. Many high-performance applications try to avoid TIME-WAIT state by deferring the closure of the connection until the peer has sent a FIN. That way, the client on the left of Figure 5 does a passive closure of the connection, transitioning from CLOSE-WAIT to LAST-ACK and finally freeing the resources after reception of the ACK of the FIN. An application running on top of an MPTCP-enabled Linux kernel might also use this approach. The difference here is that the close() of the connection (step 1 in Figure 5) only triggers the sending of a DATA_FIN. Nothing guarantees that the kernel is ready to combine the DATA_FIN with a subflow-FIN. The reception of the DATA_FIN will make the application trigger the closure of the connection (step 2), trying to avoid TIME-WAIT state with this late closure. This time, the kernel might decide to combine the DATA_FIN with a subflow-FIN. This decision will be fatal, as the subflow's state machine will not transition from CLOSE-WAIT to LAST-ACK, but rather go through FIN-WAIT-2 into TIME-WAIT state. The TIME-WAIT state will consume resources on the host for at least 2 MSL (Maximum Segment Lifetime). Thus, a smart application that tries to avoid TIME-WAIT state by doing late closure of the connection actually ends up with one of its subflows in TIME-WAIT state. A high-performance Multipath TCP kernel implementation should honor the desire of the application to do passive closure of the connection and successfully avoid TIME-WAIT state, even on the subflows.

图5显示了多路径TCP中一个非常特殊的问题。许多高性能应用程序试图通过将连接的关闭推迟到对端发送FIN之后来避免TIME-WAIT状态。这样,图5左侧的客户端对连接执行被动关闭,从CLOSE-WAIT转换到LAST-ACK,并在收到对FIN的ACK后最终释放资源。在支持MPTCP的Linux内核上运行的应用程序也可能使用这种方法。这里的区别在于,连接的close()(图5中的步骤1)只触发DATA_FIN的发送。没有任何东西保证内核已准备好将DATA_FIN与子流FIN合并。收到DATA_FIN将使应用程序触发连接的关闭(步骤2),试图通过这种延迟关闭来避免TIME-WAIT状态。这一次,内核可能决定将DATA_FIN与子流FIN合并。这个决定将是致命的,因为子流的状态机不会从CLOSE-WAIT转换到LAST-ACK,而是经过FIN-WAIT-2进入TIME-WAIT状态。TIME-WAIT状态将在主机上消耗资源至少2个MSL(最大段生存期)。因此,一个试图通过延迟关闭连接来避免TIME-WAIT状态的聪明应用程序,实际上最终会有一个子流处于TIME-WAIT状态。高性能的多路径TCP内核实现应当尊重应用程序被动关闭连接的意愿,并成功避免TIME-WAIT状态,即使在子流上也是如此。

The solution to this problem lies in an optimistic assumption that a host doing active-closure of a Multipath TCP connection by sending a DATA_FIN will soon also send a FIN on all its subflows. Thus, the passive closer of the connection can simply wait for the peer to send exactly this FIN -- enforcing passive closure even on the subflows. Of course, to avoid consuming resources indefinitely, a timer must limit the time our implementation waits for the FIN.

这个问题的解决方案在于一个乐观的假设:通过发送DATA_FIN主动关闭多路径TCP连接的主机,很快也会在其所有子流上发送FIN。因此,连接的被动关闭方可以简单地等待对端发送这个FIN,从而即使在子流上也强制执行被动关闭。当然,为了避免无限期地消耗资源,必须用一个计时器来限制实现等待FIN的时间。
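The optimistic wait can be sketched as follows. The callback, the timeout value, and the polling loop are illustrative simplifications of what a kernel would do with a proper timer:

```python
import time

def passive_close_subflow(peer_fin_received, timeout=3.0, poll=0.05):
    """After a passive close of the MPTCP connection, keep the
    subflow open and wait for the peer's subflow-FIN instead of
    sending ours first; give up when the timer fires so resources
    are not held indefinitely.  (Sketch only, not the Linux code.)

    Returns "passive" if the peer's FIN arrived in time (LAST-ACK
    path, no TIME-WAIT on our side) and "active" if we timed out
    and must close first anyway.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if peer_fin_received():
            return "passive"   # peer closed first: no TIME-WAIT here
        time.sleep(poll)
    return "active"            # timer expired: send our FIN

# Example: the peer's FIN arrives shortly after our DATA_ACK, so the
# subflow is closed passively.
fin_at = time.monotonic() + 0.1
print(passive_close_subflow(lambda: time.monotonic() >= fin_at))
```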

3.7. Packet Schedulers
3.7. 数据包调度器

In a Multipath TCP implementation, the packet scheduler is the algorithm that is executed when transmitting each packet to decide on which subflow it needs to be transmitted. The packet scheduler itself does not have any impact on the interoperability of Multipath TCP implementations. However, it may clearly impact the performance of Multipath TCP sessions. The Multipath TCP implementation in the Linux kernel supports a pluggable architecture for the packet scheduler [PaaschPhD]. As of this writing, two schedulers have been implemented: round-robin and lowest-rtt-first. The second scheduler relies on the round-trip time (rtt) measured on each TCP subflow and sends segments first over the subflow having the lowest round-trip time. They are compared in [CSWS14]. The experiments and measurements described in [CSWS14] show that the lowest-rtt-first scheduler appears to be the best compromise from a performance viewpoint. Another study of the packet schedulers is presented in [PAMS2014]. This study relies on simulations with the Multipath TCP implementation in the Linux kernel. It compares the lowest-rtt-first scheduler with the round-robin and a random scheduler. It shows some situations where the lowest-rtt-first scheduler does not perform as well as the other schedulers, but there are many scenarios where the opposite is true. [PAMS2014] notes that "it is highly likely that the optimal scheduling strategy depends on the characteristics of the paths being used."

在多路径TCP实现中,数据包调度器是在传输每个数据包时执行的算法,用于决定该数据包需要在哪个子流上传输。数据包调度器本身对多路径TCP实现的互操作性没有任何影响。但是,它可能会明显影响多路径TCP会话的性能。Linux内核中的多路径TCP实现支持数据包调度器的可插拔体系结构[PaaschPhD]。在撰写本文时,已经实现了两个调度器:round-robin(轮询)和lowest-rtt-first(最低RTT优先)。第二个调度器依赖于在每个TCP子流上测量的往返时间(rtt),并优先在往返时间最低的子流上发送段。[CSWS14]对它们进行了比较。[CSWS14]中描述的实验和测量表明,从性能角度来看,lowest-rtt-first调度器似乎是最好的折衷方案。[PAMS2014]给出了关于数据包调度器的另一项研究。该研究依赖于对Linux内核中多路径TCP实现的模拟,将lowest-rtt-first调度器与round-robin调度器和随机调度器进行比较。它展示了一些lowest-rtt-first调度器性能不如其他调度器的情况,但在许多场景中情况正好相反。[PAMS2014]指出,"最优调度策略很可能取决于所用路径的特征。"
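The lowest-rtt-first policy can be sketched in a few lines; the dictionary fields (`srtt`, `cwnd`, `inflight`) are illustrative stand-ins for the kernel's per-subflow state:

```python
def lowest_rtt_first(subflows):
    """Pick the subflow with the smallest smoothed RTT among those
    that still have room in their congestion window -- a sketch of
    the lowest-rtt-first scheduler described above."""
    available = [s for s in subflows if s["inflight"] < s["cwnd"]]
    if not available:
        return None            # all windows are full: wait for ACKs
    return min(available, key=lambda s: s["srtt"])

subflows = [
    {"name": "wifi",     "srtt": 30.0, "cwnd": 10, "inflight": 10},
    {"name": "cellular", "srtt": 60.0, "cwnd": 20, "inflight": 5},
]
# WiFi has the lower RTT but its window is full, so the next segment
# goes on the cellular subflow.
print(lowest_rtt_first(subflows)["name"])   # cellular
```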

3.8. Segment Size Selection
3.8. 段大小选择

When an application performs a write/send system call, the kernel allocates a packet buffer (sk_buff in Linux) to store the data the application wants to send. The kernel will store at most one MSS (Maximum Segment Size) of data per buffer. As the MSS can differ amongst subflows, an MPTCP implementation must select carefully the MSS used to generate application data. The Linux kernel implementation had various ways of selecting the MSS: minimum or maximum amongst the different subflows. However, these heuristics of MSS selection can cause significant performance issues in some environments. Consider the following example. An MPTCP connection has two established subflows that respectively use an MSS of 1420 and 1428 bytes. If MPTCP selects the maximum, then the application will generate segments of 1428 bytes of data. An MPTCP implementation will have to split the segment in two (1420-byte and 8-byte) segments when pushing on the subflow with the smallest MSS. The latter segment will introduce a large overhead as this single data segment will use 2 slots in the congestion window (in packets) therefore reducing by roughly twice the potential throughput (in bytes/s) of this subflow. Taking the smallest MSS does not solve the issue as there might be a case where the subflow with the smallest MSS only sends a few packets, therefore reducing the potential throughput of the other subflows.

当应用程序执行写/发送系统调用时,内核分配一个数据包缓冲区(Linux中的sk_buff)来存储应用程序想要发送的数据。内核在每个缓冲区中最多存储一个MSS(最大段大小)的数据。由于MSS在子流之间可能不同,MPTCP实现必须仔细选择用于生成应用程序数据的MSS。Linux内核实现曾有多种选择MSS的方法:取不同子流中的最小值或最大值。然而,这些MSS选择的启发式方法在某些环境中可能会导致严重的性能问题。考虑下面的例子。一个MPTCP连接有两个已建立的子流,分别使用1420字节和1428字节的MSS。如果MPTCP选择最大值,则应用程序将生成1428字节的数据段。在推送到具有较小MSS的子流时,MPTCP实现必须将该段拆分为两个段(1420字节和8字节)。后一个段将引入很大的开销,因为这个单独的数据段将占用拥塞窗口中的2个槽位(以数据包计),从而使该子流的潜在吞吐量(以字节/秒计)大约减半。取最小的MSS也不能解决问题,因为可能存在这样的情况:具有最小MSS的子流只发送很少的数据包,从而降低其他子流的潜在吞吐量。

The Linux implementation recently took another approach [DetalMSS]. Instead of selecting the minimum and maximum values, it now dynamically adapts the MSS based on the contribution of all the subflows to the connection's throughput. For each subflow, it computes the potential throughput achieved by selecting each MSS value and by taking into account the lost space in the congestion window. It then selects the MSS that allows to achieve the highest potential throughput.

Linux实现最近采用了另一种方法[DetalMSS]。现在,它不再选择最小值和最大值,而是根据所有子流对连接吞吐量的贡献动态调整MSS。对于每个子流,它通过选择每个MSS值并考虑拥塞窗口中的丢失空间来计算潜在吞吐量。然后,它选择允许实现最高潜在吞吐量的MSS。
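The slot-accounting argument behind this selection can be made concrete with a small model. The fixed congestion window and the absence of header overhead are deliberate simplifications for illustration, not the actual [DetalMSS] computation:

```python
import math

def bytes_per_window(chosen_mss, subflow_mss, cwnd_packets):
    """Bytes a subflow can carry per congestion window when segments
    are generated with chosen_mss and then split to fit subflow_mss.
    Each generated chunk consumes ceil(chosen/subflow) window slots."""
    slots_per_chunk = math.ceil(chosen_mss / subflow_mss)
    chunks = cwnd_packets // slots_per_chunk
    return chunks * chosen_mss

# The example from the text: subflow MSS values of 1420 and 1428
# bytes, with a (hypothetical) 10-packet window on each subflow.
# Generating 1428-byte segments wastes half the 1420-MSS subflow's
# window, so choosing 1420 yields the higher aggregate here.
for chosen in (1420, 1428):
    total = sum(bytes_per_window(chosen, sub, 10) for sub in (1420, 1428))
    print(chosen, total)
```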

Given the prevalence of middleboxes that clamp the MSS, Multipath TCP implementations must be able to efficiently support subflows with different MSS values. The strategy described above is a possible solution to this problem.

鉴于夹持MSS的中间盒的流行,多路径TCP实现必须能够有效地支持具有不同MSS值的子流。上述策略是解决此问题的可能方法。

3.9. Interactions with the Domain Name System
3.9. 与域名系统的互动

Multihomed clients such as smartphones can send DNS queries over any of their interfaces. When a single-homed client performs a DNS query, it receives from its local resolver the best answer for its request. If the client is multihomed, the answer in response to the DNS query may vary with the interface over which it has been sent.

智能手机等多宿主客户端可以通过其任何接口发送DNS查询。当单宿主客户端执行DNS查询时,它会从其本地解析器收到对其请求的最佳答案。如果客户端是多宿主的,则对DNS查询的应答可能因发送该查询的接口而异。

                      cdn1
                       |
           client -- cellular -- internet -- cdn3
              |                   |
              +----- wifi --------+
                       |
                     cdn2
        

Figure 6: Simple Network Topology

图6:简单网络拓扑

If the client sends a DNS query over the WiFi interface, the answer will point to the cdn2 server while the same request sent over the cellular interface will point to the cdn1 server. This might cause problems for CDN providers that locate their servers inside ISP networks and have contracts that specify that the CDN server will only be accessed from within this particular ISP. Assume now that both the client and the CDN servers support Multipath TCP. In this case, a Multipath TCP session from cdn1 or cdn2 would potentially use both the cellular network and the WiFi network. Serving the client from cdn2 over the cellular interface could violate the contract between the CDN provider and the network operators. A similar problem occurs with regular TCP if the client caches DNS replies. For example, the client obtains a DNS answer over the cellular interface and then stops this interface and starts to use its WiFi interface. If the client retrieves data from cdn1 over its WiFi interface, this may also violate the contract between the CDN and the network operators.

如果客户端通过WiFi接口发送DNS查询,则答案将指向cdn2服务器,而通过蜂窝接口发送的相同请求将指向cdn1服务器。这可能会给CDN提供商带来问题,这些提供商将其服务器定位在ISP网络内,并且合同规定只能从该特定ISP内访问CDN服务器。现在假设客户端和CDN服务器都支持多路径TCP。在这种情况下,来自cdn1或cdn2的多路径TCP会话可能同时使用蜂窝网络和WiFi网络。通过蜂窝接口从cdn2向客户端提供服务可能违反CDN提供商和网络运营商之间的合同。如果客户端缓存DNS回复,则常规TCP也会出现类似问题。例如,客户端通过蜂窝接口获得DNS应答,然后停止此接口并开始使用其WiFi接口。如果客户端通过其WiFi接口从cdn1检索数据,这也可能违反CDN与网络运营商之间的合同。

A possible solution to prevent this problem would be to modify the DNS resolution on the client. The client subnet Extension Mechanisms for DNS (EDNS) defined in [RFC7871] could be used for this purpose. When the client sends a DNS query from its WiFi interface, it should also send the client subnet corresponding to the cellular interface in this request. This would indicate to the resolver that the answer should be valid for both the WiFi and the cellular interfaces (e.g., the cdn3 server).

防止此问题的可能解决方案是修改客户端上的DNS解析。[RFC7871]中定义的DNS客户端子网扩展机制(EDNS Client Subnet)可用于此目的。当客户端从其WiFi接口发送DNS查询时,它还应在该请求中携带与蜂窝接口对应的客户端子网。这将向解析器表明,答案应对WiFi和蜂窝接口都有效(例如,cdn3服务器)。
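For illustration, the option itself is easy to construct from the wire format in [RFC7871]. The subnet below is a documentation prefix used as a stand-in for the cellular subnet, and a real client would hand the option to its resolver library rather than build packets by hand:

```python
import ipaddress
import struct

def ecs_option(subnet):
    """Build the EDNS Client Subnet option from [RFC7871]: option
    code 8, then address family, source prefix length, a scope of 0
    (as queries must use), and only the significant address bytes."""
    net = ipaddress.ip_network(subnet)
    family = 1 if net.version == 4 else 2
    addr = net.network_address.packed[: (net.prefixlen + 7) // 8]
    data = struct.pack("!HBB", family, net.prefixlen, 0) + addr
    return struct.pack("!HH", 8, len(data)) + data

# A client querying over WiFi could attach its cellular subnet so the
# resolver returns an answer valid for both interfaces:
print(ecs_option("203.0.113.0/24").hex())
```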

3.10. Captive Portals
3.10. 强制门户

Multipath TCP enables a host to use different interfaces to reach a server. In theory, this should ensure connectivity when at least one of the interfaces is active. However, in practice, there are some particular scenarios with captive portals that may cause operational problems. The reference environment is shown in Figure 7.

多路径TCP允许主机使用不同的接口访问服务器。理论上,只要至少一个接口处于活动状态,就应能确保连接。然而,在实践中,存在一些涉及强制门户的特定场景,可能会导致操作问题。参考环境如图7所示。

           client -----  network1
                |
                +------- internet ------------- server
        

Figure 7: Issue with Captive Portal

图7:强制门户的问题

The client is attached to two networks: network1 that provides limited connectivity and the entire Internet through the second network interface. In practice, this scenario corresponds to an open WiFi network with a captive portal for network1 and a cellular service for the second interface. On many smartphones, the WiFi interface is preferred over the cellular interface. If the smartphone learns a default route via both interfaces, it will typically prefer to use the WiFi interface to send its DNS request and create the first subflow. This is not optimal with Multipath TCP. A better approach would probably be to try a few attempts on the WiFi interface and then, upon failure of these attempts, try to use the second interface for the initial subflow as well.

客户端连接到两个网络:提供受限连接的network1,以及通过第二个网络接口连接的整个Internet。在实践中,该场景对应于一个带有强制门户的开放WiFi网络(network1)和作为第二个接口的蜂窝服务。在许多智能手机上,WiFi接口优先于蜂窝接口。如果智能手机通过两个接口都学习到默认路由,它通常会优先使用WiFi接口发送其DNS请求并创建第一个子流。这对多路径TCP而言并非最优。更好的方法可能是先在WiFi接口上尝试几次,在这些尝试失败后,再尝试将第二个接口也用于初始子流。

3.11. Stateless Webservers
3.11. 无状态Web服务器

MPTCP has been designed to interoperate with webservers that benefit from SYN-cookies to protect against SYN-flooding attacks [RFC4987]. MPTCP achieves this by echoing the keys negotiated during the MP_CAPABLE handshake in the third ACK of the three-way handshake. Reception of this third ACK then allows the server to reconstruct the state specific to MPTCP.

MPTCP的设计可与利用SYN cookie来防御SYN洪泛攻击[RFC4987]的Web服务器互操作。MPTCP通过在三次握手的第三个ACK中回显MP_CAPABLE握手期间协商的密钥来实现这一点。收到这第三个ACK后,服务器就可以重建MPTCP特有的状态。

However, one caveat to this mechanism is the unreliable nature of the third ACK. Indeed, when the third ACK gets lost, the server will not be able to reconstruct the MPTCP state. MPTCP will fall back to regular TCP in this case. This is in contrast to regular TCP. When the client starts sending data, the first data segment also includes the SYN-cookie, which allows the server to reconstruct the TCP-state. Further, this data segment will be retransmitted by the client in case it gets lost and thus is resilient against loss. MPTCP does not include the keys in this data segment and thus the server cannot reconstruct the MPTCP state.

This issue might be considered as a minor one for MPTCP. Losing the third ACK should only happen when packet loss is high; in this case, MPTCP provides a lot of benefits as it can move traffic away from the lossy link. It is undesirable that MPTCP has a higher chance to fall back to regular TCP in those lossy environments.

[MPTCP-DEPLOY] discusses this issue and suggests a modified handshake mechanism that ensures reliable delivery of the MP_CAPABLE, following the three-way handshake. This modification will make MPTCP reliable, even in lossy environments when servers need to use SYN-cookies to protect against SYN-flooding attacks.

3.12. Load-Balanced Server Farms

Large-scale server farms typically deploy thousands of servers behind a single virtual IP (VIP). Steering traffic to these servers is done through Layer 4 load-balancers that ensure that a TCP-flow will always be routed to the same server [Presto08].

As Multipath TCP uses multiple different TCP subflows to steer the traffic across the different paths, load-balancers need to ensure that all these subflows are routed to the same server. This implies that the load-balancers need to track the MPTCP-related state, allowing them to parse the token in the MP_JOIN and assign those subflows to the appropriate server. However, server farms typically deploy several load-balancers for reliability and capacity reasons. As a TCP subflow might get routed to any of these load-balancers, they would need to synchronize the MPTCP-related state -- a solution that is not feasible on a large scale.

The token (carried in the MP_JOIN) indicates to which MPTCP session the subflow belongs. As the token is a hash of the key, servers cannot generate the token in such a way that it provides the load-balancers with the information needed to route TCP subflows to the appropriate server. [MPTCP-LOAD] discusses this issue in detail and suggests two alternative MP_CAPABLE handshakes to overcome it.
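
The difficulty can be illustrated with a small sketch. The LoadBalancer class and its methods are hypothetical, introduced only to show why a shared token-to-server table would be required when the token cannot encode the server's identity:

```python
import hashlib
import struct

def token_of(key: bytes) -> int:
    # RFC 6824: token = most significant 32 bits of SHA-1(key).
    return struct.unpack(">I", hashlib.sha1(key).digest()[:4])[0]

class LoadBalancer:
    """Hypothetical Layer-4 balancer tracking MPTCP tokens."""

    def __init__(self):
        # token -> server; with several balancers, this table would
        # have to be synchronized between them, which does not scale.
        self.token_table = {}

    def register(self, server_key: bytes, server: str):
        # Called when a server completes an MP_CAPABLE handshake.
        self.token_table[token_of(server_key)] = server

    def route_mp_join(self, join_token: int, default_server: str) -> str:
        # MP_JOIN subflows carry only the token; without the table
        # entry, the balancer can only guess, breaking the subflow.
        return self.token_table.get(join_token, default_server)
```

Because the token is the output of a hash, a server cannot pick a key whose token deterministically points back to itself, which is what motivates the alternative handshakes in [MPTCP-LOAD].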

4. Security Considerations

This informational document discusses use cases and operational experience with Multipath TCP. An extensive analysis of the remaining security issues in the Multipath TCP specification has been published in [RFC7430], together with suggestions for possible solutions.

From a security viewpoint, it is important to note that Multipath TCP, like other multipath solutions such as SCTP, can send packets belonging to a single connection over different paths. This design feature implies that middleboxes deployed on-path under the assumption that they observe all the packets exchanged for a given connection in both directions may no longer function correctly. Typical examples are firewalls, Intrusion Detection Systems (IDSs), and Deep Packet Inspection (DPI) devices deployed in enterprise networks. Those devices expect to observe all the packets of all TCP connections. With Multipath TCP, those middleboxes may no longer observe all packets, since some of them may follow a different path. The two examples below illustrate typical deployments of such middleboxes. The first example, Figure 8, shows an MPTCP-enabled smartphone attached to both an enterprise and a cellular network. If the smartphone establishes a Multipath TCP connection towards a server, some of the packets sent by the smartphone or the server may be transmitted over the cellular network and thus be invisible to the enterprise middlebox.

     smartphone +----- enterprise net --- MBox----+------ server
                |                                 |
                +----- cellular net  -------------+
        
Figure 8: Enterprise Middlebox May Not Observe All Packets from Multihomed Host

The second example, Figure 9, shows a possible issue when multiple middleboxes are deployed inside a network. For simplicity, we assume that network1 is the default IPv4 path while network2 is the default IPv6 path. A similar issue could occur with per-flow load-balancing such as ECMP [RFC2992]. With regular TCP, all packets of a given connection would pass through either MBox1 or MBox2. With Multipath TCP, the client can easily establish one subflow over network1 and another over network2, and each middlebox would then observe only part of the traffic of the end-to-end Multipath TCP connection.
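
The effect of per-flow load-balancing on subflows can be sketched as follows. The hash function and the tuples are illustrative assumptions, not the ECMP algorithm of any particular router:

```python
import hashlib

# An ECMP-style router hashes the 5-tuple, so the two subflows of one
# Multipath TCP connection (which differ in source port or address)
# can be steered through different middleboxes, each observing only
# part of the byte stream. MD5 over the printed tuple is only a stand-in
# for a real per-flow hash.

MIDDLEBOXES = ["MBox1", "MBox2"]

def ecmp_path(five_tuple) -> str:
    digest = hashlib.md5(repr(five_tuple).encode()).digest()
    return MIDDLEBOXES[digest[0] % len(MIDDLEBOXES)]

# Two subflows of the same MPTCP connection, differing only in the
# source port, may hash to different paths:
subflow1 = ("10.0.0.1", 40001, "203.0.113.5", 443, "tcp")
subflow2 = ("10.0.0.1", 40002, "203.0.113.5", 443, "tcp")
```

Whenever the two subflows hash to different next hops, neither MBox1 nor MBox2 sees the complete byte stream, which is the situation exploited by the evasion tools discussed below.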

     client ----R-- network1  --- MBox1 -----R------------- server
                |                            |
                +-- network2  --- MBox2 -----+
        
Figure 9: Interactions between Load-Balancing and Security Middleboxes

In these two cases, it is possible for an attacker to evade some security measures that operate on the TCP byte stream and are implemented on the middleboxes by controlling the bytes that are actually sent over each subflow, and there are tools that ease those kinds of evasion [PZ15] [PT14]. This is not a security issue for Multipath TCP itself, since Multipath TCP behaves correctly. However, it demonstrates the difficulty of enforcing security policies by relying only on on-path middleboxes instead of enforcing them directly on the endpoints.

5. References
5.1. Normative References

[RFC6182] Ford, A., Raiciu, C., Handley, M., Barre, S., and J. Iyengar, "Architectural Guidelines for Multipath TCP Development", RFC 6182, DOI 10.17487/RFC6182, March 2011, <http://www.rfc-editor.org/info/rfc6182>.

[RFC6824] Ford, A., Raiciu, C., Handley, M., and O. Bonaventure, "TCP Extensions for Multipath Operation with Multiple Addresses", RFC 6824, DOI 10.17487/RFC6824, January 2013, <http://www.rfc-editor.org/info/rfc6824>.

5.2. Informative References

[BALIA] Peng, Q., Walid, A., Hwang, J., and S. Low, "Multipath TCP: analysis, design, and implementation", IEEE/ACM Trans. on Networking (TON), Volume 24, Issue 1, February 2016.

[CACM14] Paasch, C. and O. Bonaventure, "Multipath TCP", Communications of the ACM, 57(4):51-57, April 2014, <http://inl.info.ucl.ac.be/publications/multipath-tcp>.

[Cellnet12] Paasch, C., Detal, G., Duchene, F., Raiciu, C., and O. Bonaventure, "Exploring Mobile/WiFi Handover with Multipath TCP", ACM SIGCOMM workshop on Cellular Networks (Cellnet12), August 2012, <http://inl.info.ucl.ac.be/publications/exploring-mobilewifi-handover-multipath-tcp>.

[COMCOM2016] Tran, V., De Coninck, Q., Hesmans, B., Sadre, R., and O. Bonaventure, "Observing real Multipath TCP traffic", Computer Communications, DOI 10.1016/j.comcom.2016.01.014, April 2016, <http://inl.info.ucl.ac.be/publications/observing-real-multipath-tcp-traffic>.

[COMMAG2016] De Coninck, Q., Baerts, M., Hesmans, B., and O. Bonaventure, "Observing Real Smartphone Applications over Multipath TCP", IEEE Communications Magazine Network Testing Series, 54(3), March 2016, <http://inl.info.ucl.ac.be/publications/observing-real-smartphone-applications-over-multipath-tcp>.

[CONEXT12] Khalili, R., Gast, N., Popovic, M., Upadhyay, U., and J. Leboudec, "MPTCP is not Pareto-Optimal: Performance Issues and a Possible Solution", CoNEXT '12: Proceedings of the 8th international conference on Emerging networking experiments and technologies, DOI 10.1145/2413176.2413178, December 2012.

[CONEXT13] Paasch, C., Khalili, R., and O. Bonaventure, "On the Benefits of Applying Experimental Design to Improve Multipath TCP", Conference on emerging Networking EXperiments and Technologies (CoNEXT), DOI 10.1145/2535372.2535403, December 2013, <http://inl.info.ucl.ac.be/publications/benefits-applying-experimental-design-improve-multipath-tcp>.

[CONEXT15] Hesmans, B., Detal, G., Barre, S., Bauduin, R., and O. Bonaventure, "SMAPP: Towards Smart Multipath TCP-enabled APPlications", Proc. CoNEXT 2015, Heidelberg, Germany, December 2015, <http://inl.info.ucl.ac.be/publications/smapp-towards-smart-multipath-tcp-enabled-applications>.

[CSWS14] Paasch, C., Ferlin, S., Alay, O., and O. Bonaventure, "Experimental evaluation of multipath TCP schedulers", CSWS '14: Proceedings of the 2014 ACM SIGCOMM workshop on Capacity sharing workshop, DOI 10.1145/2630088.2631977, August 2014.

[DetalMSS] Detal, G., "dynamically adapt mss value", Post on the mptcp-dev mailing list, September 2014, <https://listes-2.sipr.ucl.ac.be/sympa/arc/mptcp-dev/ 2014-09/msg00130.html>.

[DetalMSS]Detal,G.,“动态调整mss值”,发布在mptcp开发邮件列表上,2014年9月<https://listes-2.sipr.ucl.ac.be/sympa/arc/mptcp-dev/ 2014-09/msg00130.html>。

[FreeBSD-MPTCP] Williams, N., "Multipath TCP For FreeBSD Kernel Patch v0.5", <http://caia.swin.edu.au/urp/newtcp/mptcp>.

[GRE-NOTIFY] Leymann, N., Heidemann, C., Wasserman, M., Xue, L., and M. Zhang, "GRE Notifications for Hybrid Access", Work in Progress, draft-lhwxz-gre-notifications-hybrid-access-01, January 2015.

[HAMPEL] Hampel, G., Rana, A., and T. Klein, "Seamless TCP mobility using lightweight MPTCP proxy", MobiWac '13: Proceedings of the 11th ACM international symposium on Mobility management and wireless access, DOI 10.1145/2508222.2508226, November 2013.

[HotMiddlebox13] Hesmans, B., Duchene, F., Paasch, C., Detal, G., and O. Bonaventure, "Are TCP Extensions Middlebox-proof?", CoNEXT workshop Hot Middlebox, December 2013, <http://inl.info.ucl.ac.be/publications/are-tcp-extensions-middlebox-proof>.

[HotMiddlebox13b] Detal, G., Paasch, C., and O. Bonaventure, "Multipath in the Middle(Box)", HotMiddlebox '13, December 2013, <http://inl.info.ucl.ac.be/publications/multipath-middlebox>.

[HotNets] Raiciu, C., Pluntke, C., Barre, S., Greenhalgh, A., Wischik, D., and M. Handley, "Data center networking with multipath TCP", Hotnetx-IX: Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks Article No. 10, DOI 10.1145/1868447.1868457, October 2010, <http://doi.acm.org/10.1145/1868447.1868457>.

[HYA-ARCH] Leymann, N., Heidemann, C., Wasserman, M., Xue, L., and M. Zhang, "Hybrid Access Network Architecture", Work in Progress, draft-lhwxz-hybrid-access-network-architecture-02, January 2015.

[ICNP12] Cao, Y., Xu, M., and X. Fu, "Delay-based congestion control for multipath TCP", 20th IEEE International Conference on Network Protocols (ICNP), DOI 10.1109/ICNP.2012.6459978, October 2012.

[IETF88] Stewart, L., "IETF 88 Meeting minutes of the MPTCP working group", November 2013, <https://www.ietf.org/proceedings/88/minutes/minutes-88-mptcp>.

[IETFJ] Bonaventure, O. and S. Seo, "Multipath TCP Deployments", IETF Journal, Vol. 12, Issue 2, November 2016.

[IMC11] Honda, M., Nishida, Y., Raiciu, C., Greenhalgh, A., Handley, M., and H. Tokuda, "Is it still possible to extend TCP?", IMC '11: Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference, DOI 10.1145/2068816.2068834, November 2011, <http://doi.acm.org/10.1145/2068816.2068834>.

[IMC13a] Detal, G., Hesmans, B., Bonaventure, O., Vanaubel, Y., and B. Donnet, "Revealing Middlebox Interference with Tracebox", Proceedings of the 2013 ACM SIGCOMM conference on Internet measurement conference, DOI 10.1145/2504730.2504757, October 2013, <http://inl.info.ucl.ac.be/publications/ revealing-middlebox-interference-tracebox>.

[IMC13b] Chen, Y., Lim, Y., Gibbens, R., Nahum, E., Khalili, R., and D. Towsley, "A measurement-based study of MultiPath TCP performance over wireless network", IMC '13: Proceedings of the 2013 conference on Internet measurement conference, DOI 10.1145/2504730.2504751, October 2013, <http://doi.acm.org/10.1145/2504730.2504751>.

[IMC13c] Pelsser, C., Cittadini, L., Vissicchio, S., and R. Bush, "From Paris to Tokyo: on the suitability of ping to measure latency", IMC '13: Proceedings of the 2013 conference on Internet measurement Conference, DOI 10.1145/2504730.2504765, October 2013, <http://doi.acm.org/10.1145/2504730.2504765>.

[INFOCOM14] Lim, Y., Chen, Y., Nahum, E., Towsley, D., and K. Lee, "Cross-layer path management in multi-path transport protocol for mobile devices", IEEE INFOCOM'14, DOI 10.1109/INFOCOM.2014.6848120, April 2014.

[KT] Seo, S., "KT's GiGA LTE", July 2015, <https://www.ietf.org/proceedings/93/slides/slides-93-mptcp-3.pdf>.

[MBTest] Hesmans, B., "MBTest", October 2013, <https://bitbucket.org/bhesmans/mbtest>.

[Mobicom15] De Coninck, Q., Baerts, M., Hesmans, B., and O. Bonaventure, "Poster - Evaluating Android Applications with Multipath TCP", Mobicom 2015 (Poster), DOI 10.1145/2789168.2795165, September 2015.

[MPTCP-DEPLOY] Paasch, C., Biswas, A., and D. Haas, "Making Multipath TCP robust for stateless webservers", Work in Progress, draft-paasch-mptcp-syncookies-02, October 2015.

[MPTCP-LOAD] Paasch, C., Greenway, G., and A. Ford, "Multipath TCP behind Layer-4 loadbalancers", Work in Progress, draft-paasch-mptcp-loadbalancer-00, September 2015.

[MPTCP-MAX-SUB] Boucadair, M. and C. Jacquenet, "Negotiating the Maximum Number of Multipath TCP (MPTCP) Subflows", Work in Progress, draft-boucadair-mptcp-max-subflow-02, May 2016.

[MPTCPBIB] Bonaventure, O., "Multipath TCP - Annotated bibliography", Technical report, April 2015, <https://github.com/obonaventure/mptcp-bib>.

[MultipathTCP-Linux] Paasch, C., Barre, S., et al., "Multipath TCP - Linux Kernel implementation", <http://www.multipath-tcp.org>.

[NSDI11] Wischik, D., Raiciu, C., Greenhalgh, A., and M. Handley, "Design, implementation and evaluation of congestion control for multipath TCP", NSDI11: In Proceedings of the 8th USENIX conference on Networked systems design and implementation, 2011.

[NSDI12] Raiciu, C., Paasch, C., Barre, S., Ford, A., Honda, M., Duchene, F., Bonaventure, O., and M. Handley, "How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP", NSDI '12: USENIX Symposium of Networked Systems Design and implementation, April 2012, <http://inl.info.ucl.ac.be/publications/how-hard-can-it-be-designing-and-implementing-deployable-multipath-tcp>.

[PaaschPhD] Paasch, C., "Improving Multipath TCP", Ph.D. Thesis, November 2014, <http://inl.info.ucl.ac.be/publications/improving-multipath-tcp>.

[PAM2016] De Coninck, Q., Baerts, M., Hesmans, B., and O. Bonaventure, "A First Analysis of Multipath TCP on Smartphones", 17th International Passive and Active Measurements Conference (PAM2016) volume 17, March 2016, <http://inl.info.ucl.ac.be/publications/first-analysis-multipath-tcp-smartphones>.

[PAMS2014] Arzani, B., Gurney, A., Cheng, S., Guerin, R., and B. Loo, "Impact of Path Selection and Scheduling Policies on MPTCP Performance", PAMS2014, DOI 10.1109/WAINA.2014.121, May 2014.

[Presto08] Greenberg, A., Lahiri, P., Maltz, D., Patel, P., and S. Sengupta, "Towards a next generation data center architecture: scalability and commoditization", ACM PRESTO 2008, DOI 10.1145/1397718.1397732, August 2008, <http://dl.acm.org/citation.cfm?id=1397732>.

[PT14] Pearce, C. and P. Thomas, "Multipath TCP Breaking Today's Networks with Tomorrow's Protocols", Proc. Blackhat Briefings, 2014, <http://www.blackhat.com/docs/us-14/materials/us-14-Pearce-Multipath-TCP-Breaking-Todays-Networks-With-Tomorrows-Protocols-WP.pdf>.

[PZ15] Pearce, C. and S. Zeadally, "Ancillary Impacts of Multipath TCP on Current and Future Network Security", IEEE Internet Computing, vol. 19, no. 5, pp. 58-65, DOI 10.1109/MIC.2015.70, September 2015.

[RFC1812] Baker, F., Ed., "Requirements for IP Version 4 Routers", RFC 1812, DOI 10.17487/RFC1812, June 1995, <http://www.rfc-editor.org/info/rfc1812>.

[RFC1928] Leech, M., Ganis, M., Lee, Y., Kuris, R., Koblas, D., and L. Jones, "SOCKS Protocol Version 5", RFC 1928, DOI 10.17487/RFC1928, March 1996, <http://www.rfc-editor.org/info/rfc1928>.

[RFC2992] Hopps, C., "Analysis of an Equal-Cost Multi-Path Algorithm", RFC 2992, DOI 10.17487/RFC2992, November 2000, <http://www.rfc-editor.org/info/rfc2992>.

[RFC4987] Eddy, W., "TCP SYN Flooding Attacks and Common Mitigations", RFC 4987, DOI 10.17487/RFC4987, August 2007, <http://www.rfc-editor.org/info/rfc4987>.

[RFC6356] Raiciu, C., Handley, M., and D. Wischik, "Coupled Congestion Control for Multipath Transport Protocols", RFC 6356, DOI 10.17487/RFC6356, October 2011, <http://www.rfc-editor.org/info/rfc6356>.

[RFC7430] Bagnulo, M., Paasch, C., Gont, F., Bonaventure, O., and C. Raiciu, "Analysis of Residual Threats and Possible Fixes for Multipath TCP (MPTCP)", RFC 7430, DOI 10.17487/RFC7430, July 2015, <http://www.rfc-editor.org/info/rfc7430>.

[RFC7871] Contavalli, C., van der Gaast, W., Lawrence, D., and W. Kumari, "Client Subnet in DNS Queries", RFC 7871, DOI 10.17487/RFC7871, May 2016, <http://www.rfc-editor.org/info/rfc7871>.

[SIGCOMM11] Raiciu, C., Barre, S., Pluntke, C., Greenhalgh, A., Wischik, D., and M. Handley, "Improving datacenter performance and robustness with multipath TCP", SIGCOMM '11: Proceedings of the ACM SIGCOMM 2011 conference, DOI 10.1145/2018436.2018467, August 2011, <http://doi.acm.org/10.1145/2018436.2018467>.

[SOCKET] Hesmans, B. and O. Bonaventure, "An enhanced socket API for Multipath TCP", Proceedings of the 2016 Applied Networking Research Workshop, DOI 10.1145/2959424.2959433, July 2016, <http://doi.acm.org/10.1145/2959424.2959433>.

[StrangeMbox] Bonaventure, O., "Multipath TCP through a strange middlebox", Blog post, January 2015, <http://blog.multipath-tcp.org/blog/html/2015/01/30/multipath_tcp_through_a_strange_middlebox.html>.

[TMA2015] Hesmans, B., Tran Viet, H., Sadre, R., and O. Bonaventure, "A First Look at Real Multipath TCP Traffic", Traffic Monitoring and Analysis, 2015, <http://inl.info.ucl.ac.be/publications/first-look-real-multipath-tcp-traffic>.

[TR-348] Broadband Forum, "TR 348 - Hybrid Access Broadband Network Architecture", Issue 1, July 2016, <https://www.broadband-forum.org/technical/download/TR-348.pdf>.

[tracebox] Detal, G. and O. Tilmans, "Tracebox: A Middlebox Detection Tool", 2013, <http://www.tracebox.org>.

Acknowledgements

This work was partially supported by the FP7-Trilogy2 project. We would like to thank all the implementers and users of the Multipath TCP implementation in the Linux kernel. This document has benefited from the comments of John Ronan, Yoshifumi Nishida, Phil Eardley, Jaehyun Hwang, Mirja Kuehlewind, Benoit Claise, Jari Arkko, Qin Wu, Spencer Dawkins, and Ben Campbell.

Authors' Addresses

Olivier Bonaventure UCLouvain

   Email: Olivier.Bonaventure@uclouvain.be
        

Christoph Paasch Apple, Inc.

   Email: cpaasch@apple.com
        

Gregory Detal Tessares

格雷戈里·德塔尔·特萨雷斯

   Email: Gregory.Detal@tessares.net
        