Internet Engineering Task Force (IETF)                     F. Baker, Ed.
Request for Comments: 7567                                 Cisco Systems
BCP: 197                                               G. Fairhurst, Ed.
Obsoletes: 2309                                   University of Aberdeen
Category: Best Current Practice                                July 2015
ISSN: 2070-1721

IETF Recommendations Regarding Active Queue Management

Abstract

This memo presents recommendations to the Internet community concerning measures to improve and preserve Internet performance. It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management (AQM) in network devices to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of AQM mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.

Based on 15 years of experience and new research, this document replaces the recommendations of RFC 2309.

Status of This Memo

This memo documents an Internet Best Current Practice.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on BCPs is available in Section 2 of RFC 5741.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7567.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   4
     1.1.  Congestion Collapse . . . . . . . . . . . . . . . . . . .   4
     1.2.  Active Queue Management to Manage Latency . . . . . . . .   5
     1.3.  Document Overview . . . . . . . . . . . . . . . . . . . .   6
     1.4.  Changes to the Recommendations of RFC 2309  . . . . . . .   7
     1.5.  Requirements Language . . . . . . . . . . . . . . . . . .   7
   2.  The Need for Active Queue Management  . . . . . . . . . . . .   7
     2.1.  AQM and Multiple Queues . . . . . . . . . . . . . . . . .  11
     2.2.  AQM and Explicit Congestion Marking (ECN) . . . . . . . .  12
     2.3.  AQM and Buffer Size . . . . . . . . . . . . . . . . . . .  12
   3.  Managing Aggressive Flows . . . . . . . . . . . . . . . . . .  13
   4.  Conclusions and Recommendations . . . . . . . . . . . . . . .  16
     4.1.  Operational Deployments SHOULD Use AQM Procedures . . . .  17
     4.2.  Signaling to the Transport Endpoints  . . . . . . . . . .  17
       4.2.1.  AQM and ECN . . . . . . . . . . . . . . . . . . . . .  18
     4.3.  AQM Algorithm Deployment SHOULD NOT Require Operational
           Tuning  . . . . . . . . . . . . . . . . . . . . . . . . .  20
     4.4.  AQM Algorithms SHOULD Respond to Measured Congestion, Not
           Application Profiles  . . . . . . . . . . . . . . . . . .  21
     4.5.  AQM Algorithms SHOULD NOT Be Dependent on Specific
           Transport Protocol Behaviors  . . . . . . . . . . . . . .  22
     4.6.  Interactions with Congestion Control Algorithms . . . . .  22
     4.7.  The Need for Further Research . . . . . . . . . . . . . .  23
   5.  Security Considerations . . . . . . . . . . . . . . . . . . .  25
   6.  Privacy Considerations  . . . . . . . . . . . . . . . . . . .  25
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  25
     7.1.  Normative References  . . . . . . . . . . . . . . . . . .  25
     7.2.  Informative References  . . . . . . . . . . . . . . . . .  26
   Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . .  31
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  31

1. Introduction

The Internet protocol architecture is based on a connectionless end-to-end packet service using the Internet Protocol, whether IPv4 [RFC791] or IPv6 [RFC2460]. The advantages of its connectionless design -- flexibility and robustness -- have been amply demonstrated. However, these advantages are not without cost: careful design is required to provide good service under heavy load. In fact, lack of attention to the dynamics of packet forwarding can result in severe service degradation or "Internet meltdown". This phenomenon was first observed during the early growth phase of the Internet in the mid 1980s [RFC896] [RFC970]; it is technically called "congestion collapse" and was a key focus of RFC 2309.

Although wide-scale congestion collapse is not common in the Internet, the presence of localized congestion collapse is by no means rare. It is therefore important to continue to avoid congestion collapse.

Since 1998, when RFC 2309 was written, the Internet has become used for a variety of traffic. In the current Internet, low latency is extremely important for many interactive and transaction-based applications. The same type of technology that RFC 2309 advocated for combating congestion collapse is also effective at limiting delays to reduce the interaction delay (latency) experienced by applications [Bri15]. High or unpredictable latency can impact the performance of the control loops used by end-to-end protocols (including congestion control algorithms using TCP). There is now also a focus on reducing network latency using the same technology.

The mechanisms described in this document may be implemented in network devices on the path between endpoints that include routers, switches, and other network middleboxes. The methods may also be implemented in the networking stacks within endpoint devices that connect to the network.

1.1. Congestion Collapse

The original fix for Internet meltdown was provided by Van Jacobson. Beginning in 1986, Jacobson developed the congestion avoidance mechanisms [Jacobson88] that are now required for implementations of the Transmission Control Protocol (TCP) [RFC793] [RFC1122]. ([RFC7414] provides a roadmap to help identify TCP-related documents.) These mechanisms operate in Internet hosts to cause TCP connections to "back off" during congestion. We say that TCP flows are "responsive" to congestion signals (i.e., packets that are dropped or marked with explicit congestion notification [RFC3168]). It is primarily these TCP congestion avoidance algorithms that prevent the congestion collapse of today's Internet. Similar algorithms are specified for other non-TCP transports.

However, that is not the end of the story. Considerable research has been done on Internet dynamics since 1988, and the Internet has grown. It has become clear that the congestion avoidance mechanisms [RFC5681], while necessary and powerful, are not sufficient to provide good service in all circumstances. Basically, there is a limit to how much control can be accomplished from the edges of the network. Some mechanisms are needed in network devices to complement the endpoint congestion avoidance mechanisms. These mechanisms may be implemented in network devices.

1.2. Active Queue Management to Manage Latency

Internet latency has become a focus of attention to increase the responsiveness of Internet applications and protocols. One major source of delay is the buildup of queues in network devices. Queueing occurs whenever the arrival rate of data at the ingress to a device exceeds the current egress rate. Such queueing is normal in a packet-switched network and is often necessary to absorb bursts in transmission and perform statistical multiplexing of traffic, but excessive queueing can lead to unwanted delay, reducing the performance of some Internet applications.

RFC 2309 introduced the concept of "Active Queue Management" (AQM), a class of technologies that, by signaling to common congestion-controlled transports such as TCP, manages the size of queues that build in network buffers. RFC 2309 also describes a specific AQM algorithm, Random Early Detection (RED), and recommends that this be widely implemented and used by default in routers.

With an appropriate set of parameters, RED is an effective algorithm. However, dynamically predicting this set of parameters was found to be difficult. As a result, RED has not been enabled by default, and its present use in the Internet is limited. Other AQM algorithms have been developed since RFC 2309 was published, some of which are self-tuning within a range of applicability. Hence, while this memo continues to recommend the deployment of AQM, it no longer recommends that RED or any other specific algorithm be used by default. It instead provides recommendations on IETF processes for the selection of appropriate algorithms, and in particular recommends that an algorithm be able to automate any required tuning for common deployment scenarios.

Deploying AQM in the network can significantly reduce the latency across an Internet path, and, since the writing of RFC 2309, this has become a key motivation for using AQM in the Internet. In the context of AQM, it is useful to distinguish between two related classes of algorithms: "queue management" versus "scheduling" algorithms. To a rough approximation, queue management algorithms manage the length of packet queues by marking or dropping packets when necessary or appropriate, while scheduling algorithms determine which packet to send next and are used primarily to manage the allocation of bandwidth among flows. While these two mechanisms are closely related, they address different performance issues and operate on different timescales. Both may be used in combination.

1.3. Document Overview

The discussion in this memo applies to "best-effort" traffic, which is to say, traffic generated by applications that accept the occasional loss, duplication, or reordering of traffic in flight. It also applies to other traffic, such as real-time traffic that can adapt its sending rate to reduce loss and/or delay. It is most effective when the adaptation occurs on timescales of a single Round-Trip Time (RTT) or a small number of RTTs, for elastic traffic [RFC1633].

Two performance issues are highlighted:

The first issue is the need for an advanced form of queue management that we call "Active Queue Management", AQM. Section 2 summarizes the benefits that active queue management can bring. A number of AQM procedures are described in the literature, with different characteristics. This document does not recommend any of them in particular, but it does make recommendations that ideally would affect the choice of procedure used in a given implementation.

The second issue, discussed in Section 4 of this memo, is the potential for future congestion collapse of the Internet due to flows that are unresponsive, or not sufficiently responsive, to congestion indications. Unfortunately, while scheduling can mitigate some of the side effects of sharing a network queue with an unresponsive flow, there is currently no consensus solution to controlling the congestion caused by such aggressive flows. Methods such as congestion exposure (ConEx) [RFC6789] offer a framework [CONEX] that can update network devices to alleviate these effects. Significant research and engineering will be required before any solution will be available. It is imperative that work to mitigate the impact of unresponsive flows is energetically pursued to ensure acceptable performance and the future stability of the Internet.

Section 4 concludes the memo with a set of recommendations to the Internet community on the use of AQM and recommendations for defining AQM algorithms.

1.4. Changes to the Recommendations of RFC 2309

This memo replaces the recommendations in [RFC2309], which resulted from past discussions of end-to-end performance, Internet congestion, and RED in the End-to-End Research Group of the Internet Research Task Force (IRTF). It results from experience with RED and other algorithms, and the AQM discussion within the IETF [AQM-WG].

Whereas RFC 2309 described AQM in terms of the length of a queue, this memo uses AQM to refer to any method that allows network devices to control the queue length and/or the mean time that a packet spends in a queue.

This memo also explicitly obsoletes the recommendation that Random Early Detection (RED) be used as the default AQM mechanism for the Internet. This is replaced by a detailed set of recommendations for selecting an appropriate AQM algorithm. As in RFC 2309, this memo illustrates the need for continued research. It also clarifies the research needed with examples appropriate at the time that this memo is published.

1.5. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

2. The Need for Active Queue Management

Active Queue Management (AQM) is a method that allows network devices to control the queue length or the mean time that a packet spends in a queue. Although AQM can be applied across a range of deployment environments, the recommendations in this document are for use in the general Internet. It is expected that the principles and guidance are also applicable to a wide range of environments, but they may require tuning for specific types of links or networks (e.g., to accommodate the traffic patterns found in data centers, the challenges of wireless infrastructure, or the higher delay encountered on satellite Internet links). The remainder of this section identifies the need for AQM and the advantages of deploying AQM methods.

The traditional technique for managing the queue length in a network device is to set a maximum length (in terms of packets) for each queue, accept packets for the queue until the maximum length is reached, then reject (drop) subsequent incoming packets until the queue decreases because a packet from the queue has been transmitted. This technique is known as "tail drop", since the packet that arrived most recently (i.e., the one on the tail of the queue) is dropped when the queue is full. This method has served the Internet well for years, but it has four important drawbacks:

1. Full Queues

The "tail drop" discipline allows queues to maintain a full (or, almost full) status for long periods of time, since tail drop signals congestion (via a packet drop) only when the queue has become full. It is important to reduce the steady-state queue size, and this is perhaps the most important goal for queue management.

The naive assumption might be that there is a simple trade-off between delay and throughput, and that the recommendation that queues be maintained in a "non-full" state essentially translates to a recommendation that low end-to-end delay is more important than high throughput. However, this does not take into account the critical role that packet bursts play in Internet performance. For example, even though TCP constrains the congestion window of a flow, packets often arrive at network devices in bursts [Leland94]. If the queue is full or almost full, an arriving burst will cause multiple packets to be dropped from the same flow. Bursts of loss can result in a global synchronization of flows throttling back, followed by a sustained period of lowered link utilization, reducing overall throughput [Flo94] [Zha90].

The goal of buffering in the network is to absorb data bursts and to transmit them during the (hopefully) ensuing bursts of silence. This is essential to permit transmission of bursts of data. Queues that are normally small are preferred in network devices, with sufficient queue capacity to absorb the bursts. The counterintuitive result is that maintaining queues that are normally small can result in higher throughput as well as lower end-to-end delay. In summary, queue limits should not reflect the steady-state queues we want to be maintained in the network; instead, they should reflect the size of bursts that a network device needs to absorb.

2. Lock-Out

In some situations tail drop allows a single connection or a few flows to monopolize the queue space, thereby starving other connections, preventing them from getting room in the queue [Flo92].

3. Mitigating the Impact of Packet Bursts

A large burst of packets can delay other packets, disrupting the control loop (e.g., the pacing of flows by the TCP ACK clock), and reducing the performance of flows that share a common bottleneck.

4. Control Loop Synchronization

Congestion control, like other end-to-end mechanisms, introduces a control loop between hosts. Sessions that share a common network bottleneck can therefore become synchronized, introducing periodic disruption (e.g., jitter/loss). "Lock-out" is often also the result of synchronization or other timing effects.

Besides tail drop, two alternative queue management disciplines that can be applied when a queue becomes full are "random drop on full" or "head drop on full". When a new packet arrives at a full queue using the "random drop on full" discipline, the network device drops a randomly selected packet from the queue (this can be an expensive operation, since it naively requires an O(N) walk through the packet queue). When a new packet arrives at a full queue using the "head drop on full" discipline, the network device drops the packet at the front of the queue [Lakshman96]. Both of these solve the lock-out problem, but neither solves the full-queues problem described above.
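
As an illustration of these three disciplines, the sketch below (Python; the fixed queue limit and function names are assumptions made for this example, not taken from any implementation) shows how each one behaves when a packet arrives at a full queue.

   import random

   MAX_QUEUE = 100  # illustrative queue limit, in packets

   def enqueue_tail_drop(queue, packet):
       """Discard the arriving packet when the queue is full."""
       if len(queue) >= MAX_QUEUE:
           return False                 # newest packet is dropped
       queue.append(packet)
       return True

   def enqueue_random_drop_on_full(queue, packet):
       """Discard a randomly selected queued packet to make room."""
       if len(queue) >= MAX_QUEUE:
           queue.pop(random.randrange(len(queue)))   # O(N) walk, as noted above
       queue.append(packet)
       return True

   def enqueue_head_drop_on_full(queue, packet):
       """Discard the packet at the front of the queue to make room."""
       if len(queue) >= MAX_QUEUE:
           queue.pop(0)                 # oldest packet is dropped
       queue.append(packet)
       return True

As the text above notes, the two alternatives to tail drop address lock-out, but none of the three signals congestion before the queue is already full.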

In general, we know how to solve the full-queues problem for "responsive" flows, i.e., those flows that throttle back in response to congestion notification. In the current Internet, dropped packets provide a critical mechanism indicating congestion notification to hosts. The solution to the full-queues problem is for network devices to drop or ECN-mark packets before a queue becomes full, so that hosts can respond to congestion before buffers overflow. We call such a proactive approach AQM. By dropping or ECN-marking packets before buffers overflow, AQM allows network devices to control when and how many packets to drop.
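
A minimal sketch of such a proactive approach is given below; it uses a RED-like linear marking/dropping probability between two thresholds, where the threshold and probability values are assumptions chosen only for illustration and are not parameters recommended by this memo.

   import random

   MIN_TH = 20    # packets: below this average depth, never signal (assumed)
   MAX_TH = 80    # packets: at or above this, always signal (assumed)
   MAX_P = 0.10   # signalling probability reached at MAX_TH (assumed)

   def aqm_should_signal(avg_queue_len):
       """Decide, before the buffer is full, whether to drop or ECN-mark.

       The probability rises linearly from 0 at MIN_TH to MAX_P at MAX_TH, so
       congestion indications are spread randomly across arrivals instead of
       hitting a burst of packets from one flow once the queue overflows.
       """
       if avg_queue_len < MIN_TH:
           return False
       if avg_queue_len >= MAX_TH:
           return True
       p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
       return random.random() < p

Randomizing the signal in time in this way also supports the recommendation in Section 4.2 that congestion indications not be concentrated on a small proportion of the active flows.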

In summary, an active queue management mechanism can provide the following advantages for responsive flows.

1. Reduce number of packets dropped in network devices

Packet bursts are an unavoidable aspect of packet networks [Willinger95]. If all the queue space in a network device is already committed to "steady-state" traffic or if the buffer space is inadequate, then the network device will have no ability to buffer bursts. By keeping the average queue size small, AQM will provide greater capacity to absorb naturally occurring bursts without dropping packets.

Furthermore, without AQM, more packets will be dropped when a queue does overflow. This is undesirable for several reasons. First, with a shared queue and the "tail drop" discipline, this can result in unnecessary global synchronization of flows, resulting in lowered average link utilization and, hence, lowered network throughput. Second, unnecessary packet drops represent a waste of network capacity on the path before the drop point.

While AQM can manage queue lengths and reduce end-to-end latency even in the absence of end-to-end congestion control, it will be able to reduce packet drops only in an environment that continues to be dominated by end-to-end congestion control.

2. Provide a lower-delay interactive service

By keeping a small average queue size, AQM will reduce the delays experienced by flows. This is particularly important for interactive applications such as short web transfers, POP/IMAP, DNS, terminal traffic (Telnet, SSH, Mosh, RDP, etc.), gaming or interactive audio-video sessions, whose subjective (and objective) performance is better when the end-to-end delay is low.

3. Avoid lock-out behavior

AQM can prevent lock-out behavior by ensuring that there will almost always be a buffer available for an incoming packet. For the same reason, AQM can prevent a bias against low-capacity, but highly bursty, flows.

Lock-out is undesirable because it constitutes a gross unfairness among groups of flows. However, we stop short of calling this benefit "increased fairness", because general fairness among flows requires per-flow state, which is not provided by queue management. For example, in a network device using AQM with only FIFO scheduling, two TCP flows may receive very different shares of the network capacity simply because they have different RTTs [Floyd91], and a flow that does not use congestion control may receive more capacity than a flow that does. AQM can therefore be combined with a scheduling mechanism that divides network traffic between multiple queues (Section 2.1).

4. Reduce the probability of control loop synchronization

The probability of network control loop synchronization can be reduced if network devices introduce randomness in the AQM functions that trigger congestion avoidance at the sending host.

2.1. AQM and Multiple Queues

A network device may use per-flow or per-class queueing with a scheduling algorithm to either prioritize certain applications or classes of traffic, limit the rate of transmission, or provide isolation between different traffic flows within a common class. For example, a router may maintain per-flow state to achieve general fairness by a per-flow scheduling algorithm such as various forms of Fair Queueing (FQ) [Dem90] [Sut99], including Weighted Fair Queueing (WFQ), Stochastic Fairness Queueing (SFQ) [McK90], Deficit Round Robin (DRR) [Shr96] [Nic12], and/or a Class-Based Queue scheduling algorithm such as CBQ [Floyd95]. Hierarchical queues may also be used, e.g., as a part of a Hierarchical Token Bucket (HTB) or Hierarchical Fair Service Curve (HFSC) [Sto97]. These methods are also used to realize a range of Quality of Service (QoS) behaviors designed to meet the need of traffic classes (e.g., using the integrated or differentiated service models).

AQM is needed even for network devices that use per-flow or per-class queueing, because scheduling algorithms by themselves do not control the overall queue size or the sizes of individual queues. AQM mechanisms might need to control the overall queue sizes to ensure that arriving bursts can be accommodated without dropping packets. AQM should also be used to control the queue size for each individual flow or class, so that they do not experience unnecessarily high delay. Using a combination of AQM and scheduling between multiple queues has been shown to offer good results in experimental use and some types of operational use.

In short, scheduling algorithms and queue management should be seen as complementary, not as replacements for each other.
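
The sketch below illustrates this combination in a highly simplified form: packets are hashed into a small set of per-flow queues, an AQM check is applied to each queue on enqueue, and a round-robin scheduler decides which queue sends next. The number of queues, the hashing, and the use of the instantaneous queue length are assumptions for this example only, not a description of any particular scheduler.

   from collections import deque

   NUM_QUEUES = 8   # illustrative number of per-flow queues

   class FlowQueues:
       """Toy combination of per-flow scheduling with per-queue AQM."""

       def __init__(self, aqm_should_signal):
           self.queues = [deque() for _ in range(NUM_QUEUES)]
           self.aqm_should_signal = aqm_should_signal   # e.g., the sketch above
           self.next_queue = 0

       def enqueue(self, flow_key, packet):
           q = self.queues[hash(flow_key) % NUM_QUEUES]
           if self.aqm_should_signal(len(q)):   # AQM manages each queue's depth
               return False                     # packet dropped (or ECN-marked)
           q.append(packet)
           return True

       def dequeue(self):
           """Round-robin scheduler: chooses which queue transmits next."""
           for _ in range(NUM_QUEUES):
               q = self.queues[self.next_queue]
               self.next_queue = (self.next_queue + 1) % NUM_QUEUES
               if q:
                   return q.popleft()
           return None                          # all queues are empty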

2.2. AQM and Explicit Congestion Marking (ECN)

An AQM method may use Explicit Congestion Notification (ECN) [RFC3168] instead of dropping to mark packets under mild or moderate congestion. ECN-marking can allow a network device to signal congestion at a point before a transport experiences congestion loss or additional queueing delay [ECN-Benefit]. Section 4.2.1 describes some of the benefits of using ECN with AQM.

2.3. AQM and Buffer Size

It is important to differentiate the choice of buffer size for a queue in a switch/router or other network device, and the threshold(s) and other parameters that determine how and when an AQM algorithm operates. The optimum buffer size is a function of operational requirements and should generally be sized to be sufficient to buffer the largest normal traffic burst that is expected. This size depends on the amount and burstiness of traffic arriving at the queue and the rate at which traffic leaves the queue.
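
To make this distinction concrete, the short calculation below (with assumed example values) shows how a given buffer size translates into a worst-case queueing delay at a given egress rate; the buffer determines how large a burst can be absorbed, while the AQM thresholds determine the queue depth or delay around which the algorithm normally operates.

   def max_queueing_delay_s(buffer_bytes, egress_rate_bps):
       """Worst-case time to drain a completely full buffer at the egress rate."""
       return buffer_bytes * 8.0 / egress_rate_bps

   # Example: a 256 KiB buffer adds up to ~210 ms of delay on a 10 Mb/s link,
   # but drains in roughly 2 ms on a 1 Gb/s link.
   print(max_queueing_delay_s(256 * 1024, 10_000_000))     # ~0.21 s
   print(max_queueing_delay_s(256 * 1024, 1_000_000_000))  # ~0.002 s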

One objective of AQM is to minimize the effect of lock-out, where one flow prevents other flows from effectively gaining capacity. This need can be illustrated by a simple example of drop-tail queueing when a new TCP flow injects packets into a queue that happens to be almost full. A TCP flow's congestion control algorithm [RFC5681] increases the flow rate to maximize its effective window. This builds a queue in the network, inducing latency in the flow and other flows that share this queue. Once a drop-tail queue fills, there will also be loss. A new flow, sending its initial burst, has an enhanced probability of filling the remaining queue and dropping packets. As a result, the new flow can be prevented from effectively sharing the queue for a period of many RTTs. In contrast, AQM can minimize the mean queue depth and therefore reduce the probability that competing sessions can materially prevent each other from performing well.

AQM frees a designer from having to limit the buffer space assigned to a queue to achieve acceptable performance, allowing allocation of sufficient buffering to satisfy the needs of the particular traffic pattern. Different types of traffic and deployment scenarios will lead to different requirements. The choice of AQM algorithm and associated parameters is therefore a function of the way in which congestion is experienced and the required reaction to achieve acceptable performance. The latter is the primary topic of the following sections.

3. Managing Aggressive Flows

One of the keys to the success of the Internet has been the congestion avoidance mechanisms of TCP. Because TCP "backs off" during congestion, a large number of TCP connections can share a single, congested link in such a way that link bandwidth is shared reasonably equitably among similarly situated flows. The equitable sharing of bandwidth among flows depends on all flows running compatible congestion avoidance algorithms, i.e., methods conformant with the current TCP specification [RFC5681].

In this document, a flow is known as "TCP-friendly" when it has a congestion response that approximates the average response expected of a TCP flow. One example method of a TCP-friendly scheme is the TCP-Friendly Rate Control algorithm [RFC5348]. In this document, the term is used more generally to describe this and other algorithms that meet these goals.
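
As a rough illustration of what "comparable conditions (drop rate, RTT, packet size)" implies, the widely cited simplified TCP throughput model (often attributed to Mathis et al.; it is much simpler than the TFRC equation of [RFC5348] and is not part of this memo) bounds the steady-state rate of a conformant TCP flow as follows.

   from math import sqrt

   def tcp_friendly_rate_bytes_per_s(mss_bytes, rtt_s, loss_rate):
       """Approximate steady-state rate of a conformant TCP flow.

       rate ~= (MSS / RTT) * sqrt(3 / (2 * p)), where p is the drop/mark rate.
       A TCP-friendly flow would use no more capacity than this under the same
       drop rate, RTT, and packet size.
       """
       return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_rate))

   # Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 180 KB/s.
   print(tcp_friendly_rate_bytes_per_s(1460, 0.100, 0.01))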

There are a variety of types of network flow. Some convenient classes that describe flows are: (1) TCP-friendly flows, (2) unresponsive flows, i.e., flows that do not slow down when congestion occurs, and (3) flows that are responsive but are less responsive to congestion than TCP. The last two classes contain more aggressive flows that can pose significant threats to Internet performance.

1. TCP-friendly flows

A TCP-friendly flow responds to congestion notification within a small number of path RTTs, and in steady-state it uses no more capacity than a conformant TCP running under comparable conditions (drop rate, RTT, packet size, etc.). This is described in the remainder of the document.

2. Non-responsive flows

A non-responsive flow does not adjust its rate in response to congestion notification within a small number of path RTTs; it can also use more capacity than a conformant TCP running under comparable conditions. There is a growing set of applications whose congestion avoidance algorithms are inadequate or nonexistent (i.e., a flow that does not throttle its sending rate when it experiences congestion).

The User Datagram Protocol (UDP) [RFC768] provides a minimal, best-effort transport to applications and upper-layer protocols (both simply called "applications" in the remainder of this document) and does not itself provide mechanisms to prevent congestion collapse or establish a degree of fairness [RFC5405].

Examples that use UDP include some streaming applications for packet voice and video, and some multicast bulk data transport. Other traffic, when aggregated, may also become unresponsive to congestion notification. If no action is taken, such unresponsive flows could lead to a new congestion collapse [RFC2914]. Some applications can even increase their traffic volume in response to congestion (e.g., by adding Forward Error Correction when loss is experienced), with the possibility that they contribute to congestion collapse.

In general, applications need to incorporate effective congestion avoidance mechanisms [RFC5405]. Research continues to be needed to identify and develop ways to accomplish congestion avoidance for presently unresponsive applications. Network devices need to be able to protect themselves against unresponsive flows, and mechanisms to accomplish this must be developed and deployed. Deployment of such mechanisms would provide an incentive for all applications to become responsive by either using a congestion-controlled transport (e.g., TCP, SCTP [RFC4960], and DCCP [RFC4340]) or incorporating their own congestion control in the application [RFC5405] [RFC6679].

3. Transport flows that are less responsive than TCP

A second threat is posed by transport protocol implementations that are responsive to congestion, but, either deliberately or through faulty implementation, reduce the effective window less than a TCP flow would have done in response to congestion. This covers a spectrum of behaviors between (1) and (2). If applications are not sufficiently responsive to congestion signals, they may gain an unfair share of the available network capacity.

For example, the popularity of the Internet has caused a proliferation in the number of TCP implementations. Some of these may fail to implement the TCP congestion avoidance mechanisms correctly because of poor implementation. Others may deliberately be implemented with congestion avoidance algorithms that are more aggressive in their use of capacity than other TCP implementations; this would allow a vendor to claim to have a "faster TCP". The logical consequence of such implementations would be a spiral of increasingly aggressive TCP implementations, leading back to the point where there is effectively no congestion avoidance and the Internet is chronically congested.

Another example could be an RTP/UDP video flow that uses an adaptive codec, but responds incompletely to indications of congestion or responds over an excessively long time period.

Such flows are unlikely to be responsive to congestion signals in a time frame comparable to a small number of end-to-end transmission delays. However, over a longer timescale, perhaps seconds in duration, they could moderate their speed, or increase their speed if they determine capacity to be available.

Tunneled traffic aggregates carrying multiple (short) TCP flows can be more aggressive than standard bulk TCP. Applications (e.g., web browsers primarily supporting HTTP 1.1 and peer-to-peer file-sharing) have exploited this by opening multiple connections to the same endpoint.

Lastly, some applications (e.g., web browsers primarily supporting HTTP 1.1) open a large number of successive short TCP flows for a single session. This can lead to each individual flow spending the majority of time in the exponential TCP slow start phase, rather than in TCP congestion avoidance. The resulting traffic aggregate can therefore be much less responsive than a single standard TCP flow.

The projected increase in the fraction of total Internet traffic for more aggressive flows in classes 2 and 3 could pose a threat to the performance of the future Internet. There is therefore an urgent need for measurements of current conditions and for further research into the ways of managing such flows. This raises many difficult issues in finding methods with an acceptable overhead cost that can identify and isolate unresponsive flows or flows that are less responsive than TCP. Finally, there is as yet little measurement or simulation evidence available about the rate at which these threats are likely to be realized or about the expected benefit of algorithms for managing such flows.

Another topic requiring consideration is the appropriate granularity of a "flow" when considering a queue management method. There are a few "natural" answers: 1) a transport (e.g., TCP or UDP) flow (source address/port, destination address/port, protocol); 2) Differentiated Services Code Point, DSCP; 3) a source/destination host pair (IP address); 4) a given source host or a given destination host, or various combinations of the above; 5) a subscriber or site receiving the Internet service (enterprise or residential).
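
Purely as an illustration, the sketch below shows how several of these granularities could be expressed as classification keys; the packet-record fields are assumptions invented for this example, not a schema defined by this memo.

   def flow_key(pkt, granularity):
       """Return a hashable key for the chosen flow granularity."""
       if granularity == "transport":   # (1) per TCP/UDP flow (5-tuple)
           return (pkt["src_ip"], pkt["src_port"],
                   pkt["dst_ip"], pkt["dst_port"], pkt["protocol"])
       if granularity == "dscp":        # (2) Differentiated Services Code Point
           return pkt["dscp"]
       if granularity == "host_pair":   # (3) source/destination host pair
           return (pkt["src_ip"], pkt["dst_ip"])
       if granularity == "src_host":    # (4) a given source host
           return pkt["src_ip"]
       raise ValueError("unknown granularity")

   # Example packet record using documentation addresses (illustrative only).
   pkt = {"src_ip": "192.0.2.1", "src_port": 5001, "dst_ip": "198.51.100.2",
          "dst_port": 443, "protocol": "TCP", "dscp": 0}
   print(flow_key(pkt, "host_pair"))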

The source/destination host pair gives an appropriate granularity in many circumstances. However, different vendors/providers use different granularities for defining a flow (as a way of "distinguishing" themselves from one another), and different granularities may be chosen for different places in the network. It may be the case that the granularity is less important than the fact that a network device needs to be able to deal with more unresponsive flows at *some* granularity. The granularity of flows for congestion management is, at least in part, a question of policy that needs to be addressed in the wider IETF community.

4. Conclusions and Recommendations

The IRTF, in producing [RFC2309], and the IETF in subsequent discussion, have developed a set of specific recommendations regarding the implementation and operational use of AQM procedures. The recommendations provided by this document are summarized as:

1. Network devices SHOULD implement some AQM mechanism to manage queue lengths, reduce end-to-end latency, and avoid lock-out phenomena within the Internet.

2. Deployed AQM algorithms SHOULD support Explicit Congestion Notification (ECN) as well as loss to signal congestion to endpoints.

3. AQM algorithms SHOULD NOT require tuning of initial or configuration parameters in common use cases.

4. AQM algorithms SHOULD respond to measured congestion, not application profiles.

5. AQM algorithms SHOULD NOT interpret specific transport protocol behaviors.

6. Congestion control algorithms for transport protocols SHOULD maximize their use of available capacity (when there is data to send) without incurring undue loss or undue round-trip delay.

7. Research, engineering, and measurement efforts are needed regarding the design of mechanisms to deal with flows that are unresponsive to congestion notification or are responsive, but are more aggressive than present TCP.

These recommendations are expressed using the word "SHOULD". This is in recognition that there may be use cases that have not been envisaged in this document in which the recommendation does not apply. Therefore, care should be taken in concluding that one's use case falls in that category; during the life of the Internet, such use cases have been rarely, if ever, observed and reported. To the contrary, available research [Choi04] says that even high-speed links in network cores that are normally very stable in depth and behavior experience occasional issues that need moderation. The recommendations are detailed in the following sections.

4.1. Operational Deployments SHOULD Use AQM Procedures

AQM procedures are designed to minimize the delay and buffer exhaustion induced in the network by queues that have filled as a result of host behavior. Marking and loss behaviors provide a signal that buffers within network devices are becoming unnecessarily full and that the sender would do well to moderate its behavior.

The use of scheduling mechanisms, such as priority queueing, classful queueing, and fair queueing, is often effective in networks to help a network serve the needs of a range of applications. Network operators can use these methods to manage traffic passing a choke point. This is discussed in [RFC2474] and [RFC2475]. When scheduling is used, AQM should be applied across the classes or flows as well as within each class or flow:

o AQM mechanisms need to control the overall queue sizes to ensure that arriving bursts can be accommodated without dropping packets.

o AQM mechanisms need to allow combination with other mechanisms, such as scheduling, to allow implementation of policies for providing fairness between different flows.

o AQM should be used to control the queue size for each individual flow or class, so that they do not experience unnecessarily high delay.

4.2. Signaling to the Transport Endpoints

There are a number of ways a network device may signal to the endpoint that the network is becoming congested and trigger a reduction in rate. The signaling methods include:

o Delaying transport segments (packets) in flight, such as in a queue.

o Dropping transport segments (packets) in transit.

o Marking transport segments (packets), such as using Explicit Congestion Notification (ECN) [RFC3168] [RFC4301] [RFC4774] [RFC6040] [RFC6679].

Increased network latency is used as an implicit signal of congestion. For example, in TCP, additional delay can affect ACK clocking and has the result of reducing the rate of transmission of new data. In the Real-time Transport Protocol (RTP), network latency impacts the RTCP-reported RTT, and increased latency can trigger a sender to adjust its rate. Methods such as Low Extra Delay Background Transport (LEDBAT) [RFC6817] assume increased latency as a primary signal of congestion. Appropriate use of delay-based methods and the implications of AQM presently remain an area for further research.

It is essential that all Internet hosts respond to loss [RFC5681] [RFC5405] [RFC4960] [RFC4340]. Packet dropping by network devices that are under load has two effects: It protects the network, which is the primary reason that network devices drop packets. The detection of loss also provides a signal to a reliable transport (e.g., TCP, SCTP) that there is incipient congestion, using a pragmatic but ambiguous heuristic: when the network discards a message in flight, the loss may imply the presence of faulty equipment or media in a path, or it may imply the presence of congestion. To be conservative, a transport must assume it may be the latter. Applications using unreliable transports (e.g., using UDP) need to similarly react to loss [RFC5405].

Network devices SHOULD use an AQM algorithm to measure local congestion and to determine which packets to mark or drop so that congestion is managed.

In general, dropping multiple packets from the same sessions in the same RTT is ineffective and can reduce throughput. Also, dropping or marking packets from multiple sessions simultaneously can have the effect of synchronizing them, resulting in increasing peaks and troughs in the subsequent traffic load. Hence, AQM algorithms SHOULD randomize dropping in time, to reduce the probability that congestion indications are only experienced by a small proportion of the active flows.

Loss due to dropping also has an effect on the efficiency of a flow and can significantly impact some classes of application. In reliable transports, the dropped data must be subsequently retransmitted. While other applications/transports may adapt to the absence of lost data, this still implies inefficient use of available capacity, and the dropped traffic can affect other flows. Hence, congestion signaling by loss is not entirely positive; it is a necessary evil.

4.2.1. AQM and ECN

Explicit Congestion Notification (ECN) [RFC4301] [RFC4774] [RFC6040] [RFC6679] is a network-layer function that allows a transport to receive network congestion information from a network device without incurring the unintended consequences of loss. ECN includes both transport mechanisms and functions implemented in network devices; the latter rely upon using AQM to decide when and whether to ECN-mark.

Congestion for ECN-capable transports is signaled by a network device setting the "Congestion Experienced (CE)" codepoint in the IP header. This codepoint is noted by the remote receiving endpoint and signaled back to the sender using a transport protocol mechanism, allowing the sender to trigger timely congestion control. The decision to set the CE codepoint requires an AQM algorithm configured with a threshold. Non-ECN capable flows (the default) are dropped under congestion.

Network devices SHOULD use an AQM algorithm that marks ECN-capable traffic when making decisions about the response to congestion. Network devices need to implement this method by marking ECN-capable traffic or by dropping non-ECN-capable traffic.

Safe deployment of ECN requires that network devices drop excessive traffic, even when marked as originating from an ECN-capable transport. This is a necessary safety precaution because:

1. A non-conformant, broken, or malicious receiver could conceal an ECN mark and not report this to the sender;

2. A non-conformant, broken, or malicious sender could ignore a reported ECN mark, as it could ignore a loss without using ECN;

3. A malfunctioning or non-conforming network device may "hide" an ECN mark (or fail to correctly set the ECN codepoint at an egress of a network tunnel).

In normal operation, such cases should be very uncommon; however, overload protection is desirable to protect traffic from misconfigured or malicious use of ECN (e.g., a denial-of-service attack that generates ECN-capable traffic that is unresponsive to CE-marking).

在正常操作中,这种情况应该非常罕见;但是,需要过载保护来保护通信量不受ECN配置错误或恶意使用的影响(例如,拒绝服务攻击,该攻击生成对CE标记无响应的支持ECN的通信量)。

When ECN is added to a scheme, the ECN support MAY define a separate set of parameters from those used for controlling packet drop. The AQM algorithm SHOULD still auto-tune these ECN-specific parameters. These parameters SHOULD also be manually configurable.

当ECN被添加到方案中时,ECN支持可以定义一组与用于控制分组丢弃的参数不同的参数。AQM算法仍应自动调整这些ECN特定参数。这些参数也应手动配置。

Network devices SHOULD use an algorithm to drop excessive traffic (e.g., at some level above the threshold for CE-marking), even when the packets are marked as originating from an ECN-capable transport.

网络设备应使用算法丢弃过多的通信量(例如,在高于CE标记阈值的某个级别),即使分组被标记为来自支持ECN的传输。
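
A minimal sketch of this overload protection is shown below. The thresholds, expressed here as queueing delay, and their default values are illustrative assumptions only; as noted above, the ECN-specific parameter could be auto-tuned separately from the drop parameter.

   def enqueue_decision(queue_delay, ecn_capable,
                        mark_threshold=0.005,   # assumed CE-marking level (s)
                        drop_threshold=0.050):  # assumed overload level (s)
       """Return "forward", "mark", or "drop" for an arriving packet."""
       if queue_delay > drop_threshold:
           return "drop"          # overload: drop even ECN-capable traffic
       if queue_delay > mark_threshold:
           return "mark" if ecn_capable else "drop"
       return "forward"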

4.3. AQM Algorithm Deployment SHOULD NOT Require Operational Tuning
4.3. AQM算法部署不应要求操作调整

A number of AQM algorithms have been proposed. Many require some form of tuning or setting of parameters for initial network conditions. This can make these algorithms difficult to use in operational networks.

已经提出了许多AQM算法。许多需要对初始网络条件进行某种形式的参数调整或设置。这会使这些算法难以在运营网络中使用。

AQM algorithms need to consider both "initial conditions" and "operational conditions". The former includes values that exist before any experience is gathered about the use of the algorithm, such as the configured speed of the interface, support for full-duplex communication, the interface MTU, and other properties of the link. The latter includes information observed from monitoring the size of the queue, the queueing delay experienced, the rate of packet discard, etc.

AQM算法需要同时考虑"初始条件"和"操作条件"。前者包括在收集到关于算法使用的任何经验之前就已存在的值，例如接口的配置速度、对全双工通信的支持、接口MTU以及链路的其他属性。后者包括通过监视队列大小、经历的排队延迟、数据包丢弃率等观察到的信息。

This document therefore specifies that AQM algorithms that are proposed for deployment in the Internet have the following properties:

因此,本文件规定,拟在互联网上部署的AQM算法具有以下特性:

o AQM algorithm deployment SHOULD NOT require tuning. An algorithm MUST provide a default behavior that auto-tunes to a reasonable performance for typical network operational conditions. This is expected to ease deployment and operation. Initial conditions, such as the interface rate and MTU size or other values derived from these, MAY be required by an AQM algorithm.

o AQM算法部署不需要调整。算法必须提供默认行为,以便在典型网络操作条件下自动调谐到合理的性能。预计这将简化部署和操作。AQM算法可能需要初始条件,例如接口速率和MTU大小或从中导出的其他值。

o AQM algorithm deployment MAY support further manual tuning that could improve performance in a specific deployed network. Algorithms that lack such variables are acceptable, but, if such variables exist, they SHOULD be externalized (made visible to the operator). The specification should identify any cases in which auto-tuning is unlikely to achieve acceptable performance and give guidance on the parametric adjustments necessary. For example, the expected response of an algorithm may need to be configured to accommodate the largest expected Path RTT, since this value cannot be known at initialization. This guidance is expected to enable the algorithm to be deployed in networks that have specific characteristics (paths with variable or larger delay, networks where capacity is impacted by interactions with lower-layer mechanisms, etc).

o AQM算法部署可支持进一步的手动调整,以提高特定部署网络中的性能。缺少此类变量的算法是可以接受的,但如果存在此类变量,则应将其外部化(使操作员可见)。规范应确定自动调谐不可能达到可接受性能的任何情况,并就必要的参数调整提供指导。例如,可能需要将算法的预期响应配置为适应最大的预期路径RTT,因为在初始化时无法知道该值。本指南预计将使算法能够部署在具有特定特征的网络中(具有可变或更大延迟的路径、容量受到与下层机制交互影响的网络等)。

o AQM algorithm deployment MAY provide logging and alarm signals to assist in identifying if an algorithm using manual or auto-tuning is functioning as expected. (For example, this could be based on an internal consistency check between input, output, and mark/drop rates over time.) This is expected to encourage deployment by default and allow operators to identify potential interactions with other network functions.

o AQM算法部署可提供日志记录和报警信号,以帮助识别使用手动或自动调整的算法是否按预期运行。(例如,这可以基于输入、输出和标记/删除率随时间变化的内部一致性检查。)这将鼓励默认部署,并允许运营商识别与其他网络功能的潜在交互。

Hence, self-tuning algorithms are to be preferred. Algorithms recommended for general Internet deployment by the IETF need to be designed so that they do not require operational (especially manual) configuration or tuning.

因此,优选自校正算法。IETF推荐用于一般互联网部署的算法需要设计为不需要操作(特别是手动)配置或调整。
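
As an illustration of auto-tuning from initial conditions alone (a sketch under assumed constants, not values recommended by this document), a byte threshold could be derived from the configured interface rate and a default target delay, clamped so that it never falls below a few MTU-sized packets:

   def default_byte_threshold(interface_rate_bps, mtu_bytes,
                              target_delay=0.005,  # assumed default target (s)
                              min_packets=3):      # assumed floor in packets
       """Derive a queue threshold in bytes from initial conditions only."""
       from_delay = int(interface_rate_bps / 8 * target_delay)
       return max(from_delay, min_packets * mtu_bytes)

   # Example: a 100 Mb/s interface with a 1500-byte MTU yields
   # max(62500, 4500) = 62500 bytes; a 1 Mb/s link instead hits the
   # 4500-byte (3-MTU) floor.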

4.4. AQM Algorithms SHOULD Respond to Measured Congestion, Not Application Profiles

4.4. AQM算法应该响应测量的拥塞,而不是应用程序配置文件

Not all applications transmit packets of the same size. Although applications may be characterized by particular profiles of packet size, this should not be used as the basis for AQM (see Section 4.5). Other methods exist, e.g., Differentiated Services queueing, Pre-Congestion Notification (PCN) [RFC5559], that can be used to differentiate and police classes of application. Network devices may combine AQM with these traffic classification mechanisms and perform AQM only on specific queues within a network device.

并非所有应用程序都传输相同大小的数据包。尽管应用程序的特点可能是数据包大小的特定配置文件,但这不应作为AQM的基础(见第4.5节)。存在其他方法,例如区分服务排队、拥塞前通知(PCN)[RFC5559],可用于区分和管理应用程序的类别。网络设备可以将AQM与这些流量分类机制相结合,并且仅在网络设备内的特定队列上执行AQM。

An AQM algorithm should not deliberately try to prejudice the size of packet that performs best (i.e., preferentially drop/mark based only on packet size). Procedures for selecting packets to drop/mark SHOULD observe the actual or projected time that a packet is in a queue (bytes at a rate being an analog to time). When an AQM algorithm decides whether to drop (or mark) a packet, it is RECOMMENDED that the size of the particular packet not be taken into account [RFC7141].

AQM算法不应故意偏向表现最佳的数据包大小（即，不应仅基于数据包大小优先丢弃/标记）。选择要丢弃/标记的数据包的过程应观察数据包在队列中的实际或预计时间（在给定速率下，字节数即相当于时间）。当AQM算法决定是否丢弃（或标记）某个数据包时，建议不考虑该特定数据包的大小[RFC7141]。
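
A small sketch of a size-independent selection rule follows; it is an illustration only, assuming the device can estimate the bytes already queued and the rate at which the queue drains. The decision depends on the projected queueing time, not on the size of the packet being considered.

   def over_delay_target(queued_bytes, drain_rate_bps, target_delay):
       """True if the projected queueing time exceeds the delay target."""
       projected_delay = queued_bytes * 8 / drain_rate_bps   # bytes -> seconds
       return projected_delay > target_delay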

Applications (or transports) generally know the packet size that they are using and can hence make their judgments about whether to use small or large packets based on the data they wish to send and the expected impact on the delay, throughput, or other performance parameter. When a transport or application responds to a dropped or marked packet, the size of the rate reduction should be proportionate to the size of the packet that was sent [RFC7141].

应用程序(或传输程序)通常知道它们正在使用的数据包大小,因此可以根据它们希望发送的数据以及对延迟、吞吐量或其他性能参数的预期影响来判断是否使用小数据包或大数据包。当传输或应用程序响应丢弃或标记的数据包时,速率降低的大小应与发送的数据包的大小成比例[RFC7141]。

An AQM-enabled system MAY instantiate different instances of an AQM algorithm to be applied within the same traffic class. Traffic classes may be differentiated based on an Access Control List (ACL), the packet DSCP [RFC2474], enabling use of the ECN field (i.e., any of ECT(0), ECT(1), or CE) [RFC3168] [RFC4774], a multi-field (MF) classifier that combines the values of a set of protocol fields (e.g., IP address, transport, ports), or an equivalent codepoint at a lower layer. This recommendation goes beyond what is defined in RFC 3168 by allowing that an implementation MAY use more than one instance of an AQM algorithm to handle both ECN-capable and non-ECN-capable packets.

启用AQM的系统可以实例化AQM算法的不同实例，以应用于同一流量类别中。流量类别可以基于访问控制列表(ACL)、数据包DSCP[RFC2474]、ECN字段的使用(即ECT(0)、ECT(1)或CE中的任何一个)[RFC3168][RFC4774]、组合一组协议字段值的多字段(MF)分类器(例如，IP地址、传输、端口)，或较低层的等效代码点来加以区分。该建议超出了RFC 3168中的定义，允许实现可以使用AQM算法的多个实例来同时处理支持ECN和不支持ECN的数据包。
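
A minimal sketch of running independent AQM instances per traffic class is given below. The classifier shown keys on the DSCP only; a multi-field classifier could equally return a key built from addresses and ports. The class, limit, and function names are assumptions for illustration.

   from collections import defaultdict

   class AqmInstance:
       """Independent AQM state for one traffic class (illustrative)."""
       def __init__(self, byte_limit=64000):
           self.byte_limit = byte_limit
           self.queued_bytes = 0

       def accept(self, packet_len):
           if self.queued_bytes + packet_len > self.byte_limit:
               return False                 # congestion signal for this class only
           self.queued_bytes += packet_len
           return True

   def classify(dscp):
       return dscp                          # hypothetical: one class per DSCP

   per_class_aqm = defaultdict(AqmInstance)

   def enqueue(dscp, packet_len):
       return per_class_aqm[classify(dscp)].accept(packet_len)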

4.5. AQM Algorithms SHOULD NOT Be Dependent on Specific Transport Protocol Behaviors

4.5. AQM算法不应依赖于特定的传输协议行为

In deploying AQM, network devices need to support a range of Internet traffic and SHOULD NOT make implicit assumptions about the characteristics desired by the set of transports/applications the network supports. That is, AQM methods should be opaque to the choice of transport and application.

在部署AQM时,网络设备需要支持一系列互联网流量,并且不应对网络支持的一组传输/应用程序所需的特性进行隐含假设。也就是说,AQM方法对于传输和应用的选择应该是不透明的。

AQM algorithms are often evaluated by considering TCP [RFC793] with a limited number of applications. Although TCP is the predominant transport in the Internet today, this no longer represents a sufficient selection of traffic for verification. There is significant use of UDP [RFC768] in voice and video services, and some applications find utility in SCTP [RFC4960] and DCCP [RFC4340]. Hence, AQM algorithms should demonstrate operation with transports other than TCP and need to consider a variety of applications. When selecting AQM algorithms, the use of tunnel encapsulations that may carry traffic aggregates needs to be considered.

AQM算法通常通过考虑TCP[RFC793]和有限数量的应用来评估。尽管TCP是当今互联网上的主要传输方式,但这不再代表对验证流量的充分选择。UDP[RFC768]在语音和视频服务中有着重要的用途,一些应用程序在SCTP[RFC4960]和DCCP[RFC4340]中发现了实用性。因此,AQM算法应该用TCP以外的传输来演示操作,并且需要考虑各种应用。在选择AQM算法时,需要考虑使用可能承载流量聚合的隧道封装。

AQM algorithms SHOULD NOT target or derive implicit assumptions about the characteristics desired by specific transports/applications. Transports and applications need to respond to the congestion signals provided by AQM (i.e., dropping or ECN-marking) in a timely manner (within a few RTTs at the latest).

AQM算法不应针对特定传输/应用所需的特性,也不应推导出隐含的假设。传输和应用程序需要及时(最迟在几个RTT内)响应AQM提供的拥塞信号(即丢弃或ECN标记)。

4.6. Interactions with Congestion Control Algorithms
4.6. 与拥塞控制算法的交互作用

Applications and transports need to react to received implicit or explicit signals that indicate the presence of congestion. This section identifies issues that can impact the design of transport protocols when using paths that use AQM.

应用程序和传输需要对接收到的指示拥塞存在的隐式或显式信号作出反应。本节确定了在使用使用AQM的路径时可能影响传输协议设计的问题。

Transport protocols and applications need timely signals of congestion. The time taken to detect and respond to congestion is increased when network devices queue packets in buffers. It can be difficult to detect tail losses at a higher layer, and this may sometimes require transport timers or probe packets to detect and respond to such loss. Loss patterns may also impact timely detection, e.g., the time may be reduced when network devices do not drop long runs of packets from the same flow.

传输协议和应用程序需要及时的拥塞信号。当网络设备将数据包排入缓冲区时,检测和响应拥塞所需的时间会增加。在更高层检测尾部丢失可能很困难,这有时可能需要传输计时器或探测数据包来检测和响应此类丢失。丢失模式还可能影响及时检测,例如,当网络设备不从同一流丢弃长时间运行的数据包时,时间可能会缩短。

A common objective of an elastic transport congestion control protocol is to allow an application to deliver the maximum rate of data without inducing excessive delays when packets are queued in buffers within the network. To achieve this, a transport should try to operate at a rate below the inflection point of the load/delay curve (the bend of what is sometimes called a "hockey stick" curve) [Jain94]. When the congestion window allows the load to approach this bend, the end-to-end delay starts to rise -- a result of congestion, as packets probabilistically arrive at overlapping times. On the one hand, a transport that operates above this point can experience congestion loss and could also trigger operator activities, such as those discussed in [RFC6057]. On the other hand, a flow may achieve both near-maximum throughput and low latency when it operates close to this knee point, with minimal contribution to router congestion. Choice of an appropriate rate/congestion window can therefore significantly impact the loss and delay experienced by a flow and will impact other flows that share a common network queue.

弹性传输拥塞控制协议的一个共同目标是，当数据包在网络内的缓冲区中排队时，允许应用程序在不引起过度延迟的情况下以最大速率交付数据。为了实现这一点，传输应尝试以低于负载/延迟曲线拐点（有时称为"曲棍球棒"曲线的弯折处）的速率运行[Jain94]。当拥塞窗口允许负载接近该拐点时，端到端延迟开始上升，这是拥塞的结果，因为数据包以概率方式在相互重叠的时间到达。一方面，在该点以上运行的传输可能会遇到拥塞丢包，也可能触发运营商的相应操作，如[RFC6057]中所讨论的。另一方面，当一个流在这个拐点附近运行时，它可以同时实现接近最大的吞吐量和较低的延迟，并且对路由器拥塞的贡献最小。因此，选择适当的速率/拥塞窗口可以显著影响流所经历的丢包和延迟，并将影响共享同一网络队列的其他流。
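
A toy numerical illustration of the "hockey stick" (assuming, purely for illustration, an M/M/1 queue whose mean delay in units of the service time is 1/(1 - load)) shows why operating just below the knee matters:

   def normalized_delay(load):
       """M/M/1 mean sojourn time in service-time units, load in [0, 1)."""
       return 1.0 / (1.0 - load)

   for load in (0.5, 0.8, 0.9, 0.95, 0.99):
       print(f"load={load:.2f}  delay={normalized_delay(load):.0f}x")
   # 0.50 -> 2x, 0.80 -> 5x, 0.90 -> 10x, 0.95 -> 20x, 0.99 -> 100x:
   # delay stays modest below the knee and rises sharply as the load
   # approaches capacity.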

Some applications may send data at a lower rate or keep less segments outstanding at any given time. Examples include multimedia codecs that stream at some natural rate (or set of rates) or an application that is naturally interactive (e.g., some web applications, interactive server-based gaming, transaction-based protocols). Such applications may have different objectives. They may not wish to maximize throughput, but may desire a lower loss rate or bounded delay.

某些应用程序可能以较低的速率发送数据,或在任何给定时间保留较少的未完成段。示例包括以某种自然速率(或一组速率)流式传输的多媒体编解码器或自然交互的应用程序(例如,一些web应用程序、基于交互服务器的游戏、基于事务的协议)。这类应用可能有不同的目标。他们可能不希望最大化吞吐量,但可能希望较低的丢失率或有界延迟。

The correct operation of an AQM-enabled network device MUST NOT rely upon specific transport responses to congestion signals.

启用AQM的网络设备的正确操作不得依赖于对拥塞信号的特定传输响应。

4.7. The Need for Further Research
4.7. 进一步研究的必要性

The second recommendation of [RFC2309] called for further research into the interaction between network queues and host applications, and the means of signaling between them. This research has occurred, and we as a community have learned a lot. However, we are not done.

[RFC2309]的第二项建议要求进一步研究网络队列和主机应用程序之间的交互以及它们之间的信令方式。这项研究已经开展,我们作为一个社区已经学到了很多。然而,我们还没有完成。

We have learned that the problems of congestion, latency, and buffer-sizing have not gone away and are becoming more important to many users. A number of self-tuning AQM algorithms have been found that offer significant advantages for deployed networks. There is also renewed interest in deploying AQM and the potential of ECN.

我们了解到,拥塞、延迟和缓冲区大小的问题并没有消失,并且对许多用户来说变得越来越重要。许多自校正AQM算法被发现为部署的网络提供了显著的优势。人们对部署AQM和ECN的潜力也重新产生了兴趣。

Traffic patterns can depend on the network deployment scenario, and Internet research therefore needs to consider the implications of a diverse range of application interactions. This includes ensuring that combinations of mechanisms, as well as combinations of traffic patterns, do not interact and result in either significantly reduced flow throughput or significantly increased latency.

流量模式可能取决于网络部署场景，因此互联网研究需要考虑各种各样的应用交互所带来的影响。这包括确保机制的组合以及流量模式的组合不会相互作用而导致流吞吐量显著降低或延迟显著增加。

At the time of writing (in 2015), an obvious example of further research is the need to consider the many-to-one communication patterns found in data centers, known as incast [Ren12] (e.g., produced by Map/Reduce applications). Such analysis needs to study not only each application traffic type but also combinations of types of traffic.

在撰写本文时（2015年），进一步研究的一个明显例子是需要考虑数据中心中出现的多对一通信模式，即所谓的incast [Ren12]（例如，由Map/Reduce应用程序产生）。这种分析不仅需要研究每种应用程序的流量类型，还需要研究各种流量类型的组合。

Research also needs to consider the need to extend our taxonomy of transport sessions to include not only "mice" and "elephants", but "lemmings". Here, "lemmings" are flash crowds of "mice" that the network inadvertently tries to signal to as if they were "elephant" flows, resulting in head-of-line blocking in a data center deployment scenario.

研究还需要考虑扩展我们对传输会话的分类，不仅包括"老鼠"和"大象"，还包括"旅鼠"。在这里，"旅鼠"是指成群突发出现的"老鼠"，网络无意中把它们当作"大象"流来发出信号，从而在数据中心部署场景中导致队头阻塞。

Examples of other required research include:

其他所需研究的例子包括:

o new AQM and scheduling algorithms

o 新的AQM和调度算法

o appropriate use of delay-based methods and the implications of AQM

o 适当使用基于延迟的方法和AQM的含义

o suitable algorithms for marking ECN-capable packets that do not require operational configuration or tuning for common use

o 用于标记支持ECN的数据包的合适算法,这些数据包不需要操作配置或调优以供通用

o experience in the deployment of ECN alongside AQM

o 与AQM一起部署ECN的经验

o tools for enabling AQM (and ECN) deployment and measuring the performance

o 用于启用AQM(和ECN)部署和测量性能的工具

o methods for mitigating the impact of non-conformant and malicious flows

o 缓解不一致和恶意流影响的方法

o implications on applications of using new network and transport methods

o 使用新网络和传输方法对应用的影响

Hence, this document reiterates the call of RFC 2309: we need continuing research as applications develop.

因此,本文件重申了RFC2309的要求:随着应用程序的开发,我们需要继续研究。

5. Security Considerations
5. 安全考虑

While security is a very important issue, it is largely orthogonal to the performance issues discussed in this memo.

虽然安全性是一个非常重要的问题,但它在很大程度上与本备忘录中讨论的性能问题是正交的。

This recommendation requires algorithms to be independent of specific transport or application behaviors. Therefore, a network device does not require visibility or access to upper-layer protocol information to implement an AQM algorithm. This ability to operate in an application-agnostic fashion is an example of a privacy-enhancing feature.

本建议要求算法独立于特定的传输或应用程序行为。因此,网络设备不需要可见性或访问上层协议信息来实现AQM算法。这种以应用程序无关的方式操作的能力是隐私增强功能的一个示例。

Many deployed network devices use queueing methods that allow unresponsive traffic to capture network capacity, denying access to other traffic flows. This could potentially be used as a denial-of-service attack. This threat could be reduced in network devices that deploy AQM or some form of scheduling. We note, however, that a denial-of-service attack that results in unresponsive traffic flows may be indistinguishable from other traffic flows (e.g., tunnels carrying aggregates of short flows, high-rate isochronous applications). New methods therefore may remain vulnerable, and this document recommends that ongoing research consider ways to mitigate such attacks.

许多已部署的网络设备使用排队方法,允许无响应流量捕获网络容量,拒绝访问其他流量。这可能被用作拒绝服务攻击。这种威胁可以在部署AQM或某种形式的调度的网络设备中减少。然而,我们注意到,导致无响应交通流的拒绝服务攻击可能与其他交通流(例如,承载短流量聚集的隧道、高速等时应用程序)无法区分。因此,新的方法可能仍然脆弱,并且该文件建议正在进行的研究考虑减轻这种攻击的方法。

6. Privacy Considerations
6. 隐私考虑

This document, by itself, presents no new privacy issues.

本文件本身没有提出新的隐私问题。

7. References
7. 工具书类
7.1. Normative References
7.1. 规范性引用文件

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <http://www.rfc-editor.org/info/rfc2119>.

[RFC2119]Bradner,S.,“RFC中用于表示需求水平的关键词”,BCP 14,RFC 2119,DOI 10.17487/RFC2119,1997年3月<http://www.rfc-editor.org/info/rfc2119>.

[RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition of Explicit Congestion Notification (ECN) to IP", RFC 3168, DOI 10.17487/RFC3168, September 2001, <http://www.rfc-editor.org/info/rfc3168>.

[RFC3168]Ramakrishnan,K.,Floyd,S.,和D.Black,“向IP添加显式拥塞通知(ECN)”,RFC 3168,DOI 10.17487/RFC3168,2001年9月<http://www.rfc-editor.org/info/rfc3168>.

[RFC4301] Kent, S. and K. Seo, "Security Architecture for the Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, December 2005, <http://www.rfc-editor.org/info/rfc4301>.

[RFC4301]Kent,S.和K.Seo,“互联网协议的安全架构”,RFC 4301,DOI 10.17487/RFC4301,2005年12月<http://www.rfc-editor.org/info/rfc4301>.

[RFC4774] Floyd, S., "Specifying Alternate Semantics for the Explicit Congestion Notification (ECN) Field", BCP 124, RFC 4774, DOI 10.17487/RFC4774, November 2006, <http://www.rfc-editor.org/info/rfc4774>.

[RFC4774]Floyd,S.,“为显式拥塞通知(ECN)字段指定替代语义”,BCP 124,RFC 4774,DOI 10.17487/RFC4774,2006年11月<http://www.rfc-editor.org/info/rfc4774>.

[RFC5405] Eggert, L. and G. Fairhurst, "Unicast UDP Usage Guidelines for Application Designers", BCP 145, RFC 5405, DOI 10.17487/RFC5405, November 2008, <http://www.rfc-editor.org/info/rfc5405>.

[RFC5405]Eggert,L.和G.Fairhurst,“应用程序设计者的单播UDP使用指南”,BCP 145,RFC 5405,DOI 10.17487/RFC5405,2008年11月<http://www.rfc-editor.org/info/rfc5405>.

[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, <http://www.rfc-editor.org/info/rfc5681>.

[RFC5681]Allman,M.,Paxson,V.和E.Blanton,“TCP拥塞控制”,RFC 5681,DOI 10.17487/RFC56812009年9月<http://www.rfc-editor.org/info/rfc5681>.

[RFC6040] Briscoe, B., "Tunnelling of Explicit Congestion Notification", RFC 6040, DOI 10.17487/RFC6040, November 2010, <http://www.rfc-editor.org/info/rfc6040>.

[RFC6040]Briscoe,B.,“明确拥塞通知的隧道挖掘”,RFC 6040,DOI 10.17487/RFC6040,2010年11月<http://www.rfc-editor.org/info/rfc6040>.

[RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., and K. Carlberg, "Explicit Congestion Notification (ECN) for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 2012, <http://www.rfc-editor.org/info/rfc6679>.

[RFC6679]Westerlund,M.,Johansson,I.,Perkins,C.,O'Hanlon,P.,和K.Carlberg,“UDP上RTP的显式拥塞通知(ECN)”,RFC 6679,DOI 10.17487/RFC66792012年8月<http://www.rfc-editor.org/info/rfc6679>.

[RFC7141] Briscoe, B. and J. Manner, "Byte and Packet Congestion Notification", BCP 41, RFC 7141, DOI 10.17487/RFC7141, February 2014, <http://www.rfc-editor.org/info/rfc7141>.

[RFC7141]Briscoe,B.和J.Way,“字节和数据包拥塞通知”,BCP 41,RFC 7141,DOI 10.17487/RFC7141,2014年2月<http://www.rfc-editor.org/info/rfc7141>.

7.2. Informative References
7.2. 资料性引用

[AQM-WG] IETF, "Active Queue Management and Packet Scheduling (aqm) WG", <http://datatracker.ietf.org/wg/aqm/charter/>.

[AQM-WG]IETF,“主动队列管理和数据包调度(AQM)工作组”<http://datatracker.ietf.org/wg/aqm/charter/>.

[Bri15] Briscoe, B., Brunstrom, A., Petlund, A., Hayes, D., Ros, D., Tsang, I., Gjessing, S., Fairhurst, G., Griwodz, C., and M. Welzl, "Reducing Internet Latency: A Survey of Techniques and their Merit", IEEE Communications Surveys & Tutorials, 2015.

[Bri15]Briscoe,B.,Brunstrom,A.,Petlund,A.,Hayes,D.,Ros,D.,Tsang,I.,Gjessing,S.,Fairhurst,G.,Griwodz,C.,和M.Welzl,“减少互联网延迟:技术及其优点的调查”,IEEE通信调查与教程,2015年。

[Choi04] Choi, B., Moon, S., Zhang, Z., Papagiannaki, K., and C. Diot, "Analysis of Point-To-Point Packet Delay In an Operational Network", March 2004.

[Choi04]Choi,B.,Moon,S.,Zhang,Z.,Papagiannaki,K.,和C.Diot,“运营网络中点对点数据包延迟的分析”,2004年3月。

[CONEX] Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx) Concepts, Abstract Mechanism and Requirements", Work in Progress, draft-ietf-conex-abstract-mech-13, October 2014.

[CONEX]Mathis,M.和B.Briscoe,“拥堵暴露(CONEX)概念、抽象机制和要求”,正在进行的工作,草稿-ietf-CONEX-Abstract-mech-13,2014年10月。

[Dem90] Demers, A., Keshav, S., and S. Shenker, "Analysis and Simulation of a Fair Queueing Algorithm, Internetworking: Research and Experience", SIGCOMM Symposium proceedings on Communications architectures and protocols, 1990.

[Dem90]Demers,A.,Keshav,S.,和S.Shenker,“公平排队算法的分析和模拟,互联网:研究和经验”,SIGCOM通信体系结构和协议研讨会论文集,1990年。

[ECN-Benefit] Fairhurst, G. and M. Welzl, "The Benefits of using Explicit Congestion Notification (ECN)", Work in Progress, draft-ietf-aqm-ecn-benefits-05, June 2015.

[ECN效益]Fairhurst,G.和M.Welzl,“使用显式拥塞通知(ECN)的效益”,正在进行的工作,草稿-ietf-aqm-ECN-效益-052015年6月。

[Flo92] Floyd, S. and V. Jacobsen, "On Traffic Phase Effects in Packet-Switched Gateways", 1992, <http://www.icir.org/floyd/papers/phase.pdf>.

[Flo92]Floyd,S.和V.Jacobsen,“分组交换网关中的流量相位效应”,1992年<http://www.icir.org/floyd/papers/phase.pdf>.

[Flo94] Floyd, S. and V. Jacobsen, "The Synchronization of Periodic Routing Messages", 1994, <http://ee.lbl.gov/papers/sync_94.pdf>.

[Flo94]Floyd,S.和V.Jacobsen,“定期路由消息的同步”,1994年<http://ee.lbl.gov/papers/sync_94.pdf>.

[Floyd91] Floyd, S., "Connections with Multiple Congested Gateways in Packet-Switched Networks Part 1: One-way Traffic.", Computer Communications Review , October 1991.

[Floyd91]Floyd,S.,“分组交换网络中多个拥塞网关的连接第1部分:单向流量”,《计算机通信评论》,1991年10月。

[Floyd95] Floyd, S. and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks", IEEE/ACM Transactions on Networking, August 1995.

[Floyd95]Floyd,S.和V.Jacobson,“分组网络的链路共享和资源管理模型”,IEEE/ACM网络事务,1995年8月。

[Jacobson88] Jacobson, V., "Congestion Avoidance and Control", SIGCOMM Symposium proceedings on Communications architectures and protocols, August 1988.

[Jacobson88]Jacobson,V.,“拥塞避免和控制”,SIGCOMM通信体系结构和协议研讨会论文集,1988年8月。

[Jain94] Jain, R., Ramakrishnan, KK., and C. Dah-Ming, "Congestion avoidance scheme for computer networks", US Patent Office 5377327, December 1994.

[Jain94]Jain,R.,Ramakrishnan,KK.,和C.Dah Ming,“计算机网络拥塞避免方案”,美国专利局5377327,1994年12月。

[Lakshman96] Lakshman, TV., Neidhardt, A., and T. Ott, "The Drop From Front Strategy in TCP Over ATM and Its Interworking with Other Control Features", IEEE Infocomm, 1996.

[Lakshman 96]Lakshman,TV.,Neidhardt,A.,和T.Ott,“ATM上TCP的前端下降策略及其与其他控制功能的互通”,IEEE Infocomm,1996年。

[Leland94] Leland, W., Taqqu, M., Willinger, W., and D. Wilson, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)", IEEE/ACM Transactions on Networking, February 1994.

[Leland 94]Leland,W.,Taqqu,M.,Willinger,W.,和D.Wilson,“关于以太网流量的自相似性质(扩展版)”,IEEE/ACM网络事务,1994年2月。

[McK90] McKenney, PE. and G. Varghese, "Stochastic Fairness Queuing", 1990, <http://www2.rdrop.com/~paulmck/scalability/paper/sfq.2002.06.04.pdf>.

[McK90] McKenney,PE.和G.Varghese,"随机公平排队",1990年,<http://www2.rdrop.com/~paulmck/scalability/paper/sfq.2002.06.04.pdf>。

[Nic12] Nichols, K. and V. Jacobson, "Controlling Queue Delay", Communications of the ACM, Vol. 55, Issue 7, pp. 42-50, July 2012.

[Nic12] Nichols,K.和V.Jacobson,"控制队列延迟",《ACM通讯》,第55卷,第7期,第42-50页,2012年7月。

[Ren12] Ren, Y., Zhao, Y., and P. Liu, "A survey on TCP Incast in data center networks", International Journal of Communication Systems, Volume 27, Issue 8, pages 116-117, 2012.

[Ren12] 任,Y.,赵,Y.,和P.刘,"数据中心网络中TCP Incast的调查",国际通信系统杂志,第27卷,第8期,第116-117页,2012年。

[RFC768] Postel, J., "User Datagram Protocol", STD 6, RFC 768, DOI 10.17487/RFC0768, August 1980, <http://www.rfc-editor.org/info/rfc768>.

[RFC768]Postel,J.,“用户数据报协议”,STD 6,RFC 768,DOI 10.17487/RFC0768,1980年8月<http://www.rfc-editor.org/info/rfc768>.

[RFC791] Postel, J., "Internet Protocol", STD 5, RFC 791, DOI 10.17487/RFC0791, September 1981, <http://www.rfc-editor.org/info/rfc791>.

[RFC791]Postel,J.,“互联网协议”,STD 5,RFC 791,DOI 10.17487/RFC07911981年9月<http://www.rfc-editor.org/info/rfc791>.

[RFC793] Postel, J., "Transmission Control Protocol", STD 7, RFC 793, DOI 10.17487/RFC0793, September 1981, <http://www.rfc-editor.org/info/rfc793>.

[RFC793]Postel,J.,“传输控制协议”,标准7,RFC 793,DOI 10.17487/RFC0793,1981年9月<http://www.rfc-editor.org/info/rfc793>.

[RFC896] Nagle, J., "Congestion Control in IP/TCP Internetworks", RFC 896, DOI 10.17487/RFC0896, January 1984, <http://www.rfc-editor.org/info/rfc896>.

[RFC896]Nagle,J.,“IP/TCP互联网中的拥塞控制”,RFC 896,DOI 10.17487/RFC0896,1984年1月<http://www.rfc-editor.org/info/rfc896>.

[RFC970] Nagle, J., "On Packet Switches With Infinite Storage", RFC 970, DOI 10.17487/RFC0970, December 1985, <http://www.rfc-editor.org/info/rfc970>.

[RFC970]Nagle,J.,“具有无限存储的分组交换机”,RFC 970,DOI 10.17487/RFC0970,1985年12月<http://www.rfc-editor.org/info/rfc970>.

[RFC1122] Braden, R., Ed., "Requirements for Internet Hosts - Communication Layers", STD 3, RFC 1122, DOI 10.17487/RFC1122, October 1989, <http://www.rfc-editor.org/info/rfc1122>.

[RFC1122]Braden,R.,Ed.“互联网主机的要求-通信层”,STD 3,RFC 1122,DOI 10.17487/RFC1122,1989年10月<http://www.rfc-editor.org/info/rfc1122>.

[RFC1633] Braden, R., Clark, D., and S. Shenker, "Integrated Services in the Internet Architecture: an Overview", RFC 1633, DOI 10.17487/RFC1633, June 1994, <http://www.rfc-editor.org/info/rfc1633>.

[RFC1633]Braden,R.,Clark,D.,和S.Shenker,“互联网体系结构中的综合服务:概述”,RFC 1633,DOI 10.17487/RFC1633,1994年6月<http://www.rfc-editor.org/info/rfc1633>.

[RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S., Wroclawski, J., and L. Zhang, "Recommendations on Queue Management and Congestion Avoidance in the Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998, <http://www.rfc-editor.org/info/rfc2309>.

[RFC2309]Braden,B.,Clark,D.,Crowcroft,J.,Davie,B.,Deering,S.,Estrin,D.,Floyd,S.,Jacobson,V.,Minshall,G.,Partridge,C.,Peterson,L.,Ramakrishnan,K.,Shenker,S.,Wroclawski,J.,and L.Zhang,“关于互联网中队列管理和拥塞避免的建议”,RFC 2309,DOI 10.17487/RFC2309,1998年4月, <http://www.rfc-editor.org/info/rfc2309>.

[RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 2460, DOI 10.17487/RFC2460, December 1998, <http://www.rfc-editor.org/info/rfc2460>.

[RFC2460]Deering,S.和R.Hinden,“互联网协议,第6版(IPv6)规范”,RFC 2460,DOI 10.17487/RFC2460,1998年12月<http://www.rfc-editor.org/info/rfc2460>.

[RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, DOI 10.17487/RFC2474, December 1998, <http://www.rfc-editor.org/info/rfc2474>.

[RFC2474]Nichols,K.,Blake,S.,Baker,F.,和D.Black,“IPv4和IPv6报头中区分服务字段(DS字段)的定义”,RFC 2474,DOI 10.17487/RFC2474,1998年12月<http://www.rfc-editor.org/info/rfc2474>.

[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, DOI 10.17487/RFC2475, December 1998, <http://www.rfc-editor.org/info/rfc2475>.

[RFC2475]Blake,S.,Black,D.,Carlson,M.,Davies,E.,Wang,Z.,和W.Weiss,“差异化服务架构”,RFC 2475,DOI 10.17487/RFC2475,1998年12月<http://www.rfc-editor.org/info/rfc2475>.

[RFC2914] Floyd, S., "Congestion Control Principles", BCP 41, RFC 2914, DOI 10.17487/RFC2914, September 2000, <http://www.rfc-editor.org/info/rfc2914>.

[RFC2914]Floyd,S.,“拥塞控制原则”,BCP 41,RFC 2914,DOI 10.17487/RFC2914,2000年9月<http://www.rfc-editor.org/info/rfc2914>.

[RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram Congestion Control Protocol (DCCP)", RFC 4340, DOI 10.17487/RFC4340, March 2006, <http://www.rfc-editor.org/info/rfc4340>.

[RFC4340]Kohler,E.,Handley,M.和S.Floyd,“数据报拥塞控制协议(DCCP)”,RFC 4340,DOI 10.17487/RFC4340,2006年3月<http://www.rfc-editor.org/info/rfc4340>.

[RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", RFC 4960, DOI 10.17487/RFC4960, September 2007, <http://www.rfc-editor.org/info/rfc4960>.

[RFC4960]Stewart,R.,Ed.“流控制传输协议”,RFC 4960,DOI 10.17487/RFC4960,2007年9月<http://www.rfc-editor.org/info/rfc4960>.

[RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP Friendly Rate Control (TFRC): Protocol Specification", RFC 5348, DOI 10.17487/RFC5348, September 2008, <http://www.rfc-editor.org/info/rfc5348>.

[RFC5348]Floyd,S.,Handley,M.,Padhye,J.,和J.Widmer,“TCP友好速率控制(TFRC):协议规范”,RFC 5348,DOI 10.17487/RFC5348,2008年9月<http://www.rfc-editor.org/info/rfc5348>.

[RFC5559] Eardley, P., Ed., "Pre-Congestion Notification (PCN) Architecture", RFC 5559, DOI 10.17487/RFC5559, June 2009, <http://www.rfc-editor.org/info/rfc5559>.

[RFC5559]Eardley,P.,Ed.,“拥塞前通知(PCN)体系结构”,RFC 5559,DOI 10.17487/RFC5559,2009年6月<http://www.rfc-editor.org/info/rfc5559>.

[RFC6057] Bastian, C., Klieber, T., Livingood, J., Mills, J., and R. Woundy, "Comcast's Protocol-Agnostic Congestion Management System", RFC 6057, DOI 10.17487/RFC6057, December 2010, <http://www.rfc-editor.org/info/rfc6057>.

[RFC6057]Bastian,C.,Klieber,T.,Livingood,J.,Mills,J.,和R.Woundy,“康卡斯特的协议不可知拥塞管理系统”,RFC 6057,DOI 10.17487/RFC6057,2010年12月<http://www.rfc-editor.org/info/rfc6057>.

[RFC6789] Briscoe, B., Ed., Woundy, R., Ed., and A. Cooper, Ed., "Congestion Exposure (ConEx) Concepts and Use Cases", RFC 6789, DOI 10.17487/RFC6789, December 2012, <http://www.rfc-editor.org/info/rfc6789>.

[RFC6789]Briscoe,B.,Ed.,Woundy,R.,Ed.,和A.Cooper,Ed.,“拥塞暴露(ConEx)概念和用例”,RFC 6789,DOI 10.17487/RFC6789,2012年12月<http://www.rfc-editor.org/info/rfc6789>.

[RFC6817] Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind, "Low Extra Delay Background Transport (LEDBAT)", RFC 6817, DOI 10.17487/RFC6817, December 2012, <http://www.rfc-editor.org/info/rfc6817>.

[RFC6817]Shalunov,S.,Hazel,G.,Iyengar,J.,和M.Kuehlewind,“低额外延迟背景传输(LEDBAT)”,RFC 6817,DOI 10.17487/RFC6817,2012年12月<http://www.rfc-editor.org/info/rfc6817>.

[RFC7414] Duke, M., Braden, R., Eddy, W., Blanton, E., and A. Zimmermann, "A Roadmap for Transmission Control Protocol (TCP) Specification Documents", RFC 7414, DOI 10.17487/RFC7414, February 2015, <http://www.rfc-editor.org/info/rfc7414>.

[RFC7414]杜克,M.,布拉登,R.,艾迪,W.,布兰顿,E.,和A.齐默尔曼,“传输控制协议(TCP)规范文件路线图”,RFC 7414,DOI 10.17487/RFC7414,2015年2月<http://www.rfc-editor.org/info/rfc7414>.

[Shr96] Shreedhar, M. and G. Varghese, "Efficient Fair Queueing Using Deficit Round Robin", IEEE/ACM Transactions on Networking, Vol. 4, No. 3, July 1996.

[Shr96]Shreedhar,M.和G.Varghese,“使用赤字循环的有效公平排队”,IEEE/ACM网络交易,第4卷,第3期,1996年7月。

[Sto97] Stoica, I. and H. Zhang, "A Hierarchical Fair Service Curve algorithm for Link sharing, real-time and priority services", ACM SIGCOMM, 1997.

[Sto97]Stoica,I.和H.Zhang,“链路共享、实时和优先级服务的分层公平服务曲线算法”,ACM SIGCOMM,1997。

[Sut99] Suter, B., "Buffer Management Schemes for Supporting TCP in Gigabit Routers with Per-flow Queueing", IEEE Journal on Selected Areas in Communications, Vol. 17, Issue 6, pp. 1159-1169, June 1999.

[Sut99]Suter,B.,“在每流排队的千兆路由器中支持TCP的缓冲区管理方案”,IEEE通信选定领域杂志,第17卷,第6期,第1159-1169页,1999年6月。

[Willinger95] Willinger, W., Taqqu, M., Sherman, R., Wilson, D., and V. Jacobson, "Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level", SIGCOMM Symposium proceedings on Communications architectures and protocols, August 1995.

[Willinger95]Willinger,W.,Taqqu,M.,Sherman,R.,Wilson,D.,和V.Jacobson,“通过高可变性的自相似性:源级以太网LAN流量的统计分析”,SIGCOM通信架构和协议研讨会论文集,1995年8月。

[Zha90] Zhang, L. and D. Clark, "Oscillating Behavior of Network Traffic: A Case Study Simulation", 1990, <http://groups.csail.mit.edu/ana/Publications/Zhang-DDC-Oscillating-Behavior-of-Network-Traffic-1990.pdf>.

[Zha90]Zhang,L.和D.Clark,“网络流量的振荡行为:案例研究模拟”,1990年<http://groups.csail.mit.edu/ana/Publications/Zhang-DDC-Oscillating-Behavior-of-Network-Traffic-1990.pdf>.

Acknowledgements

致谢

The original draft of this document describing best current practice was based on [RFC2309], an Informational RFC. It was written by the End-to-End Research Group, which is to say Bob Braden, Dave Clark, Jon Crowcroft, Bruce Davie, Steve Deering, Deborah Estrin, Sally Floyd, Van Jacobson, Greg Minshall, Craig Partridge, Larry Peterson, KK Ramakrishnan, Scott Shenker, John Wroclawski, and Lixia Zhang. Although there are important differences, many of the key arguments in the present document remain unchanged from those in RFC 2309.

本文件的原始草案描述了当前最佳实践,其依据是[RFC2309],一份信息性RFC。它是由端到端研究小组撰写的,即鲍勃·布拉登、戴夫·克拉克、乔恩·克罗克罗夫特、布鲁斯·戴维斯、史蒂夫·迪林、黛博拉·埃斯特林、莎莉·弗洛伊德、范·雅各布森、格雷格·明索尔、克雷格·帕特里奇、拉里·彼得森、KK·罗摩克里希南、斯科特·申克、约翰·沃克罗夫斯基和张丽霞。尽管存在重要差异,但本文件中的许多关键论点与RFC 2309中的论点保持不变。

The need for an updated document was agreed to in the TSV area meeting at IETF 86. This document was reviewed on the aqm@ietf.org list. Comments were received from Colin Perkins, Richard Scheffenegger, Dave Taht, John Leslie, David Collier-Brown, and many others.

IETF 86的TSV区域会议上一致认为需要一份更新的文件。本文件已在aqm@ietf.org邮件列表上接受了审查。科林·珀金斯、理查德·谢弗内格、戴夫·塔特、约翰·莱斯利、大卫·科利尔·布朗以及许多其他人提供了评论。

Gorry Fairhurst was in part supported by the European Community under its Seventh Framework Programme through the Reducing Internet Transport Latency (RITE) project (ICT-317700).

Gorry Fairhurst通过减少互联网传输延迟(RITE)项目(ICT-317700),部分得到了欧洲共同体第七框架计划的支持。

Authors' Addresses

作者地址

Fred Baker (editor) Cisco Systems Santa Barbara, California 93117 United States

弗雷德·贝克(编辑)美国加利福尼亚州圣巴巴拉思科系统公司93117

   Email: fred@cisco.com

Godred Fairhurst (editor) University of Aberdeen School of Engineering Fraser Noble Building Aberdeen, Scotland AB24 3UE United Kingdom

Godred Fairhurst(编辑)阿伯丁大学工程学院弗雷泽贵族建筑苏格兰阿伯丁英国AB24 3UE

   Email: gorry@erg.abdn.ac.uk
   URI:   http://www.erg.abdn.ac.uk