Internet Engineering Task Force (IETF)                          F. Baker
Request for Comments: 7806                                        R. Pan
Category: Informational                                    Cisco Systems
ISSN: 2070-1721                                               April 2016
        

On Queuing, Marking, and Dropping

Abstract

This note discusses queuing and marking/dropping algorithms. While these algorithms may be implemented in a coupled manner, this note argues that specifications, measurements, and comparisons should decouple the different algorithms and their contributions to system behavior.

Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7806.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Fair Queuing: Algorithms and History
     2.1.  Generalized Processor Sharing
       2.1.1.  GPS Comparisons: Transmission Quanta
       2.1.2.  GPS Comparisons: Flow Definition
       2.1.3.  GPS Comparisons: Unit of Measurement
     2.2.  GPS Approximations
       2.2.1.  Definition of a Queuing Algorithm
       2.2.2.  Round-Robin Models
       2.2.3.  Calendar Queue Models
       2.2.4.  Work-Conserving Models and Stochastic Fairness Queuing
       2.2.5.  Non-Work-Conserving Models and Virtual Clock
   3.  Queuing, Marking, and Dropping
     3.1.  Queuing with Tail Mark/Drop
     3.2.  Queuing with CoDel Mark/Drop
     3.3.  Queuing with RED or PIE Mark/Drop
   4.  Conclusion
   5.  Security Considerations
   6.  References
     6.1.  Normative References
     6.2.  Informative References
   Acknowledgements
   Authors' Addresses
        
1. Introduction

In the discussion of Active Queue Management (AQM), there has been discussion of the coupling of queue management algorithms such as Stochastic Fairness Queuing [SFQ], Virtual Clock [VirtualClock], or Deficit Round Robin [DRR] with mark/drop algorithms such as Controlled Delay (CoDel) [DELAY-AQM] or Proportional Integral controller Enhanced (PIE) [AQM-PIE]. In the interest of clarifying the discussion, we document possible implementation approaches to that and analyze the possible effects and side effects. The language and model derive from the Architecture for Differentiated Services [RFC2475].

This note is informational and is intended to describe reasonable possibilities without constraining outcomes. This is not so much about "right" or "wrong" as it is "what might be reasonable" and discusses several possible implementation strategies. Also, while queuing might be implemented in almost any layer, the note specifically addresses queues that might be used in the Differentiated Services Architecture and are therefore at or below the IP layer.

2. Fair Queuing: Algorithms and History

There is extensive history in the set of algorithms collectively referred to as "fair queuing". The model was initially discussed in [RFC970], which proposed it hypothetically as a solution to the TCP Silly Window Syndrome issue in BSD 4.1. The problem was that, due to a TCP implementation bug, some senders would settle into sending a long stream of very short segments, which unnecessarily consumed bandwidth on TCP and IP headers and occupied short packet buffers, thereby disrupting competing sessions. Nagle suggested that if packet streams were sorted by their source address and the sources treated in a round-robin fashion, a sender's effect on end-to-end latency and increased loss rate would primarily affect only itself. This touched off perhaps a decade of work by various researchers on what was and is termed "fair queuing", philosophical discussions of the meaning of the word "fair", operational reasons that one might want a "weighted" or "predictably unfair" queuing algorithm, and so on.

2.1. Generalized Processor Sharing

Conceptually, any fair queuing algorithm attempts to implement some approximation to the Generalized Processor Sharing [GPS] model.

The GPS model, in its essence, presumes that a set of identified data streams, called "flows", pass through an interface. Each flow has a rate when measured over a period of time; a voice session might, for example, require 64 kbit/s plus whatever overhead is necessary to deliver it, and a TCP session might have variable throughput depending on where it is in its evolution. The premise of Generalized Processor Sharing is that on all time scales, the flow occupies a predictable bit rate so that if there is enough bandwidth for the flow in the long term, it also lacks nothing in the short term. "All time scales" is obviously untenable in a packet network -- and even in a traditional Time-Division Multiplexer (TDM) circuit switch network -- because a timescale shorter than the duration of a packet will only see one packet at a time. However, it provides an ideal for other models to be compared against.

There are a number of attributes of approximations to the GPS model that bear operational consideration, including at least the transmission quanta, the definition of a "flow", and the unit of measurement. Implementation approaches have different practical impacts as well.

2.1.1. GPS Comparisons: Transmission Quanta

The most obvious comparison between the GPS model and common approximations to it is that real world data is not delivered uniformly, but in some quantum. The smallest quantum in a packet network is a packet. But quanta can be larger; for example, in video applications, it is common to describe data flow in frames per second, where a frame describes a picture on a screen or the changes made from a previous one. A single video frame is commonly on the order of tens of packets. If a codec is delivering thirty frames per second, it is conceivable that the packets comprising a frame might be sent as thirty bursts per second, with each burst sent at the interface rate of the camera or other sender. Similarly, TCP exchanges have an initial window (common values of which include 1, 2, 3, 4 [RFC3390], and 10 [RFC6928]), and there are also reports of bursts of 64 KB at the relevant Maximum Segment Size (MSS), which is to say about 45 packets in one burst, presumably coming from TCP Segment Offload (TSO, also called TCP Offload Engine (TOE)) engines (at least one implementation is known to be able to send a burst of 256 KB). After that initial burst, TCP senders commonly send pairs of packets but may send either smaller or larger bursts [RFC5690].

2.1.2. GPS Comparisons: Flow Definition

An important engineering trade-off relevant to GPS is the definition of a "flow". A flow is, by definition, a defined data stream. Common definitions include:

o packets in a single transport layer session ("microflow"), identified by a five-tuple [RFC2990];

o packets between a single pair of addresses, identified by a source and destination address or prefix;

o packets from a single source address or prefix [RFC970];

o packets to a single destination address or prefix; and

o packets to or from a single subscriber, customer, or peer [RFC6057]. In Service Provider operations, this might be a neighboring Autonomous System; in broadband, this might be a residential customer.

The difference should be apparent. Consider, to pick two examples, a comparison between sorting by source address and sorting by destination address, in the case that a given router interface has N application sessions going through it between N/2 local destinations and N remote sources. Sorting by source, or in this case by source/destination pair, would give each remote peer an upper-bound guarantee of 1/N of the available capacity, which might be distributed very unevenly among the local destinations. Sorting by destination would give each local destination an upper-bound guarantee of 2/N of the available capacity, which might be distributed very unevenly among the remote systems and correlated sessions. To whom is each fair? In both cases, they deliver equal service by their own definition, but that might not be someone else's definition.

Flow fairness, and the implications of TCP's congestion avoidance algorithms, is discussed extensively in [NoFair].

2.1.3. GPS Comparisons: Unit of Measurement

And finally, there is the question of what is measured for rate. If the only objective is to force packet streams to not dominate each other, it is sufficient to count packets. However, if the issue is the bit rate of a Service Level Agreement (SLA), one must consider the sizes of the packets (the aggregate throughput of a flow measured in bits or bytes). If predictable unfairness is a consideration, the value must be weighted accordingly.

[RFC7141] discusses measurement.

2.2. GPS Approximations

Carrying the matter further, a queuing algorithm may also be termed "work conserving" or "non work conserving". A queue in a work-conserving algorithm, by definition, is either empty, in which case no attempt is being made to dequeue data from it, or contains something, in which case the algorithm continuously tries to empty the queue. A work-conserving queue that contains queued data at an interface with a given rate will deliver data at that rate until it empties. A non-work-conserving queue might stop delivering even though it still contains data. A common reason for doing this is to impose an artificial upper bound on a class of traffic that is lower than the rate of the underlying physical interface.

2.2.1. Definition of a Queuing Algorithm

In the discussion following, we assume a basic definition of a queuing algorithm. A queuing algorithm has, at minimum:

o some form of internal storage for the elements kept in the queue;

o if it has multiple internal classifications, then it has

* a method for classifying elements and

* additional storage for the classifier and implied classes;

o potentially, a method for creating the queue;

o potentially, a method for destroying the queue;

o an enqueuing method for placing packets into the queue or queuing system; and

o a dequeuing method for removing packets from the queue or queuing system.

There may also be other information or methods, such as the ability to inspect the queue. It also often has inspectable external attributes, such as the total volume of packets or bytes in queue, and may have limit thresholds, such as a maximum number of packets or bytes the queue might hold.

For example, a simple FIFO queue has a linear data structure, enqueues packets at the tail, and dequeues packets from the head. It might have a maximum queue depth and a current queue depth maintained in packets or bytes.
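
A minimal sketch of such a FIFO queue, in Python (a hypothetical illustration, not drawn from any particular implementation; the byte-based limit threshold is one possible choice of the attributes named above):

```python
from collections import deque

class FifoQueue:
    """Simple FIFO queue with a byte-based limit threshold (tail drop)."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes   # limit threshold
        self.cur_bytes = 0           # inspectable current depth, in bytes
        self.packets = deque()       # linear internal storage

    def enqueue(self, packet):
        """Place a packet at the tail; defensively tail-drop on overflow."""
        if self.cur_bytes + len(packet) > self.max_bytes:
            return False             # defensive tail drop
        self.packets.append(packet)
        self.cur_bytes += len(packet)
        return True

    def dequeue(self):
        """Remove and return the packet at the head, or None if empty."""
        if not self.packets:
            return None
        packet = self.packets.popleft()
        self.cur_bytes -= len(packet)
        return packet
```

The enqueue and dequeue methods here are the two methods the definition requires; there is no classifier because a simple FIFO has a single internal class.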

2.2.2. Round-Robin Models

One class of implementation approaches, generically referred to as "Weighted Round Robin" (WRR), implements the structure of the queue as an array or ring of subqueues associated with flows for whatever definition of a flow is important.

The arriving packet must, of course, first be classified. If a hash is used as a classifier, the hash result might be used as an array index, selecting the subqueue that the packet will go into. One can imagine other classifiers, such as using a Differentiated Services Code Point (DSCP) value as an index into an array containing the queue number for a flow, or more complex access list implementations.

In any event, a subqueue contains the traffic for a flow, and data is sent from each subqueue in succession.

Upon entering the queue, the enqueue method places a classified packet into a simple FIFO subqueue.

On dequeue, the subqueues are searched in round-robin order, and when a subqueue is identified that contains data, the dequeue method removes a specified quantum of data from it. That quantum is at minimum a packet, but it may be more. If the system is intended to maintain a byte rate, there will be memory between searches of the excess previously dequeued.

                            +-+
                          +>|1|
                          | +-+
                          |  |
                          | +-+               +-+
                          | |1|             +>|3|
                          | +-+             | +-+
                          |  |              |  |
                          | +-+      +-+    | +-+
                          | |1|    +>|2|    | |3|
                          | +-+    | +-+    | +-+
                          |  A     |  A     |  A
                          |  |     |  |     |  |
                         ++--++   ++--++   ++--++
                      +->| Q  |-->| Q  |-->| Q  |--+
                      |  +----+   +----+   +----+  |
                      +----------------------------+

Figure 1: Round-Robin Queues
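
The enqueue/dequeue pattern above might be sketched as follows (a hypothetical illustration; the subqueue count, the CRC-32 hash classifier, and the one-packet quantum are all assumptions, not requirements of the model):

```python
import zlib
from collections import deque

NUM_SUBQUEUES = 8  # assumed size of the ring of subqueues

class RoundRobinQueue:
    def __init__(self):
        self.subqueues = [deque() for _ in range(NUM_SUBQUEUES)]
        self.next_index = 0  # where the round-robin scan resumes

    def classify(self, src, dst):
        """Hash the flow identifier; the result selects a subqueue."""
        return zlib.crc32(f"{src}->{dst}".encode()) % NUM_SUBQUEUES

    def enqueue(self, src, dst, packet):
        """Place a classified packet into its simple FIFO subqueue."""
        self.subqueues[self.classify(src, dst)].append(packet)

    def dequeue(self):
        """Search subqueues in round-robin order; quantum is one packet."""
        for step in range(NUM_SUBQUEUES):
            i = (self.next_index + step) % NUM_SUBQUEUES
            if self.subqueues[i]:
                self.next_index = (i + 1) % NUM_SUBQUEUES
                return self.subqueues[i].popleft()
        return None  # all subqueues empty
```

A byte-rate variant would additionally carry a per-subqueue deficit between scans, as described for DRR in Section 2.2.4.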

2.2.3. Calendar Queue Models

Another class of implementation approaches, generically referred to as Calendar Queue Implementations [CalendarQueue], implements the structure of the queue as an array or ring of subqueues (often called "buckets") associated with time or sequence; each bucket contains the set of packets, which may be null, intended to be sent at a certain time or following the emptying of the previous bucket. The queue structure includes a look-aside table that indicates the current depth (which is to say, the next bucket) of any given class of traffic, which might similarly be identified using a hash, a DSCP, an access list, or any other classifier. Conceptually, the queues each contain zero or more packets from each class of traffic. One is the queue being emptied "now"; the rest are associated with some time or sequence in the future. The characteristics under "load" have been investigated in [Deadline].

Upon entering the queue, the enqueue method, considering a classified packet, determines the current depth of that class with a view to scheduling it for transmission at some time or sequence in the future. If the unit of scheduling is a packet and the queuing quantum is one packet per subqueue, then when a burst of packets arrives in a given flow that starts with no queued data, the first packet goes into the "next" queue, the second into its successor, and so on. If there was some data in the class, the first packet in the burst would go into the bucket pointed to by the look-aside table. If the unit of scheduling is time, the explanation in Section 2.2.5 might be simplest to follow, but the bucket selected will be the bucket corresponding to a given transmission time in the future. A necessary side effect, memory being finite, is that there exist a finite number of "future" buckets. If enough traffic arrives to cause a class to wrap, one is forced to drop something (tail drop).

On dequeue, the buckets are searched at their stated times or in their stated sequence, and when a bucket is identified that contains data, the dequeue method removes a specified quantum of data from it and, by extension, from the associated traffic classes. A single bucket might contain data from a number of classes simultaneously.

                             +-+
                           +>|1|
                           | +-+
                           |  |
                           | +-+      +-+
                           | |2|    +>|2|
                           | +-+    | +-+
                           |  |     |  |
                           | +-+    | +-+      +-+
                           | |3|    | |1|    +>|1|
                           | +-+    | +-+    | +-+
                           |  A     |  A     |  A
                           |  |     |  |     |  |
                          ++--++   ++--++   ++--++
                  "now"+->| Q  |-->| Q  |-->| Q  |-->...
                          +----+   +----+   +----+
                             A       A         A
                             |3      |2        |1
                          +++++++++++++++++++++++
                          ||||     Flow      ||||
                          +++++++++++++++++++++++

Figure 2: Calendar Queue
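
This structure might be sketched as follows (hypothetical; absolute slot numbers stand in for the ring of buckets, the HORIZON constant models the finite number of "future" buckets, and the quantum is one packet per bucket per class):

```python
from collections import deque

HORIZON = 16  # assumed number of "future" buckets

class CalendarQueue:
    def __init__(self):
        self.now = 0           # slot currently being served ("now")
        self.buckets = {}      # slot number -> deque of packets
        self.class_depth = {}  # look-aside table: class -> next free slot

    def enqueue(self, cls, packet):
        """Schedule one packet per bucket for this class, starting at the
        class's current depth (the bucket after "now" if it has none)."""
        slot = max(self.class_depth.get(cls, self.now + 1), self.now + 1)
        if slot >= self.now + HORIZON:
            return False       # class would wrap: defensive tail drop
        self.buckets.setdefault(slot, deque()).append(packet)
        self.class_depth[cls] = slot + 1
        return True

    def dequeue(self):
        """Search buckets in sequence from "now"; a single bucket may
        hold packets from several classes at once."""
        for slot in range(self.now, self.now + HORIZON):
            if self.buckets.get(slot):
                self.now = slot
                return self.buckets[slot].popleft()
        return None
```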

In any event, a subqueue contains the traffic for a point in time or a point in sequence, and data is sent from each subqueue in succession. If subqueues are associated with time, an interesting end case develops: if the system is draining a given subqueue and the time of the next subqueue arrives, what should the system do? One

在任何情况下,子队列都包含某个时间点或某个序列点的通信量,并且从每个子队列连续发送数据。如果子队列与时间相关联,则会出现一个有趣的最终情况:如果系统正在排空给定的子队列,并且下一个子队列的时间到达,那么系统应该做什么?一

potentially valid line of reasoning would have it continue delivering the data in the present queue on the assumption that it will likely trade off for time in the next. Another potentially valid line of reasoning would have it discard any waiting data in the present queue and move to the next.

潜在有效的推理路线是,假设它可能会在下一个队列中权衡时间,它将继续在当前队列中交付数据。另一种可能有效的推理方式是,它将丢弃当前队列中等待的任何数据,并移动到下一个队列。

2.2.4. Work-Conserving Models and Stochastic Fairness Queuing

Stochastic Fairness Queuing [SFQ] is an example of a work-conserving algorithm. This algorithm measures packets and considers a "flow" to be an equivalence class of traffic defined by a hashing algorithm over the source and destination IPv4 addresses. As packets arrive, the enqueue method performs the indicated hash and places the packet into the indicated subqueue. The dequeue method operates as described in Section 2.2.2; subqueues are inspected in round-robin sequence and a packet is removed if they contain one or more packets.

The Deficit Round Robin [DRR] model modifies the quanta to bytes and deals with variable length packets. A subqueue descriptor contains a waiting quantum (the amount intended to be dequeued on the previous dequeue attempt that was not satisfied), a per-round quantum (the subqueue is intended to dequeue a certain number of bytes each round), and a maximum to permit (some multiple of the MTU). In each dequeue attempt, the dequeue method sets the waiting quantum to the smaller of the maximum quantum and the sum of the waiting and incremental quantum. It then dequeues up to the waiting quantum (in bytes) of packets in the queue and reduces the waiting quantum by the number of bytes dequeued. Since packets will not normally be exactly the size of the quantum, some dequeue attempts will dequeue more than others, but they will over time average the incremental quantum per round if there is data present.
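
The deficit accounting just described might be sketched for a single subqueue as follows (a hypothetical Python illustration; the quantum values used below are examples only):

```python
from collections import deque

class DrrSubqueue:
    """One DRR subqueue: dequeues roughly `quantum` bytes per round."""

    def __init__(self, quantum, max_quantum):
        self.packets = deque()
        self.quantum = quantum          # per-round (incremental) quantum, bytes
        self.max_quantum = max_quantum  # cap, e.g. a small multiple of the MTU
        self.waiting = 0                # quantum left over from earlier rounds

    def dequeue_round(self):
        """Serve this subqueue for one round; return the packets sent."""
        if not self.packets:
            self.waiting = 0            # an idle subqueue accumulates no credit
            return []
        # Waiting quantum = min(maximum, waiting + incremental quantum).
        self.waiting = min(self.max_quantum, self.waiting + self.quantum)
        sent = []
        # Dequeue up to the waiting quantum, reducing it by bytes sent.
        while self.packets and len(self.packets[0]) <= self.waiting:
            pkt = self.packets.popleft()
            self.waiting -= len(pkt)
            sent.append(pkt)
        return sent
```

As the text notes, individual rounds dequeue more or less than the incremental quantum, but a backlogged subqueue averages that quantum per round over time.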

[SFQ] and [DRR] could be implemented as described in Section 2.2.3. The weakness of a classical WRR approach is the search time expended inspecting and not choosing sub-queues that contain no data or not enough to trigger a transmission from them.

2.2.5. Non-Work-Conserving Models and Virtual Clock

Virtual Clock [VirtualClock] is an example of a non-work-conserving algorithm. It is trivially implemented as described in Section 2.2.3. It associates buckets with intervals in time that have durations on the order of microseconds to tens of milliseconds. Each flow is assigned a rate in bytes per interval. The flow entry maintains a point in time the "next" packet in the flow should be scheduled.

On enqueue, the method determines whether the "next schedule" time is "in the past"; if so, the packet is scheduled "now", and if not, the packet is scheduled at that time. It then calculates the new "next schedule" time as the current "next schedule" time plus the length of the packet divided by the rate. If the resulting time is also in the past, the "next schedule" time is set to "now"; otherwise, it is set to the calculated time. As noted in Section 2.2.3, there is an interesting point regarding "too much time in the future"; if a packet is scheduled too far into the future, it may be marked or dropped in the AQM procedure, and if it runs beyond the end of the queuing system, may be defensively tail dropped.
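The "next schedule" computation might be sketched as follows (hypothetical; rate in bytes per second, times as seconds on a monotonic clock):

```python
import time

class VirtualClockFlow:
    """Per-flow Virtual Clock scheduling state."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.next_schedule = 0.0  # time the flow's next packet should go

    def schedule(self, packet_len, now=None):
        """Return the transmission time for a packet of packet_len bytes
        and advance the flow's "next schedule" time."""
        if now is None:
            now = time.monotonic()
        # If the "next schedule" time is in the past, schedule "now";
        # otherwise, schedule at that time.
        send_at = self.next_schedule if self.next_schedule > now else now
        # New "next schedule" = old "next schedule" + length / rate,
        # clamped to "now" if the result is also in the past.
        nxt = self.next_schedule + packet_len / self.rate
        self.next_schedule = nxt if nxt > now else now
        return send_at
```

A packet whose computed time lands too far in the future would, per the text, be subject to marking, dropping, or defensive tail drop; that bound is omitted here for brevity.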

On dequeue, the bucket associated with the time "now" is inspected. If it contains a packet, the packet is dequeued and transmitted. If the bucket is empty and the time for the next bucket has not arrived, the system waits, even if there is a packet in the next bucket. As noted in Section 2.2.3, there is an interesting point regarding the queue associated with "now". If a subsequent bucket, even if it is actually empty, would be delayed by the transmission of a packet, one could imagine marking the packet Explicit Congestion Notification - Congestion Experienced (ECN-CE) [RFC3168] [RFC6679] or dropping the packet.

3. Queuing, Marking, and Dropping

Queuing, marking, and dropping are integrated in any system that has a queue. If nothing else, as memory is finite, a system has to drop as discussed in Sections 2.2.3 and 2.2.5 in order to protect itself. However, host transports interpret drops as signals, so AQM algorithms use that as a mechanism to signal.

It is useful to think of the effects of queuing as a signal as well. The receiver sends acknowledgements as data is received, so the arrival of acknowledgements at the sender paces the sender at approximately the average rate it is able to achieve through the network. This is true even if the sender keeps an arbitrarily large amount of data stored in network queues, and it is the basis for delay-based congestion control algorithms. So, delaying a packet momentarily in order to permit another session to improve its operation has the effect of signaling a slightly lower capacity to the sender.

3.1. Queuing with Tail Mark/Drop

In the default case in which a FIFO queue is used with defensive tail drop only, the effect is to signal to the sender in two ways:

o Ack clocking, which involves pacing the sender to send at approximately the rate it can deliver data to the receiver; and

o Defensive loss, which is when a sender sends faster than available capacity (such as by probing network capacity when fully utilizing that capacity) and overburdens a queue.

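The default case above can be sketched as a FIFO whose only mark/drop behavior is the defensive tail drop. This is an illustrative sketch; the buffer-limit name and byte accounting are assumptions, not from the text.

```python
from collections import deque

class TailDropFifo:
    """Sketch of a FIFO queue with defensive tail drop only, as in the
    default case described above.  `limit_bytes` is illustrative."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.depth = 0
        self.q = deque()

    def enqueue(self, packet, length):
        if self.depth + length > self.limit:
            # Defensive loss: the queue is out of resources.
            return False
        self.q.append((packet, length))
        self.depth += length
        return True

    def dequeue(self):
        if not self.q:
            return None
        packet, length = self.q.popleft()
        self.depth -= length
        return packet
```

Ack clocking is not visible in the queue itself; it emerges from the pacing of the dequeue side as acknowledgements return to the sender.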
3.2. Queuing with CoDel Mark/Drop

In any case wherein a queuing algorithm is used along with CoDel [DELAY-AQM], the sequence of events is that a packet is time stamped and enqueued; on dequeue, its timestamp is compared to a subsequent reading of the clock, and the packet is then acted on, whether by dropping it, marking and forwarding it, or simply forwarding it. This is to say that the only drop algorithm inherent in queuing is the defensive drop when the queue's resources are overrun. However, the intention of marking or dropping is to signal to the sender much earlier, when a certain amount of delay has been observed. In a FIFO+CoDel, Virtual Clock+CoDel, or FlowQueue-CoDel [FQ-CODEL] implementation, the queuing algorithm is completely separate from the AQM algorithm. Using them in series results in four signals to the sender:

o Ack clocking, which involves pacing the sender to send at approximately the rate it can deliver data to the receiver through a queue;

o Lossless signaling that a certain delay threshold has been reached, if ECN [RFC3168] [RFC6679] is in use;

o Intentional signaling via loss that a certain delay threshold has been reached, if ECN is not in use; and

o Defensive loss, which is when a sender sends faster than available capacity (such as by probing network capacity when fully utilizing that capacity) and overburdens a queue.

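The ordering that distinguishes this case, time stamp on enqueue, compare the timestamp to the clock on dequeue, then act, can be sketched as follows. This shows only the sojourn-time comparison, not CoDel's full control law (real CoDel drops only after the delay persists for an interval and spaces drops according to its control law); the TARGET constant and all names are illustrative.

```python
from collections import deque

TARGET = 0.005  # illustrative delay threshold in seconds, not CoDel's control law

def enqueue(q, packet, now):
    # Time stamp the packet on enqueue, as described above.
    q.append((packet, now))

def dequeue(q, now, ecn_capable):
    """Compare the enqueue timestamp to a subsequent reading of the
    clock, then act on the packet: mark and forward, drop, or simply
    forward.  Simplified sketch of the ordering only."""
    if not q:
        return None, None
    packet, enqueued_at = q.popleft()
    sojourn = now - enqueued_at
    if sojourn > TARGET:
        if ecn_capable:
            return packet, "ECN-CE"   # lossless signal [RFC3168]
        return None, "dropped"        # intentional signal via loss
    return packet, "forwarded"
```

Note that the AQM decision happens entirely at dequeue time; the queuing algorithm that ordered the packets is not involved.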
3.3. Queuing with RED or PIE Mark/Drop

In any case wherein a queuing algorithm is used along with PIE [AQM-PIE], Random Early Detection (RED) [RFC7567], or other such algorithms, the sequence of events is that the queue is inspected and the arriving packet is dropped, marked, or left unchanged; it is then enqueued, dequeued, compared to a subsequent reading of the clock, and forwarded on.

This is to say that the AQM mark/drop algorithm precedes enqueue; if it has not been effective and as a result the queue is out of resources anyway, the defensive drop algorithm steps in, and failing that, the queue operates as it otherwise would. Hence, in a FIFO+PIE, SFQ+PIE, or Virtual Clock+PIE implementation, the queuing algorithm is again completely separate from the AQM algorithm. Using them in series results in four signals to the sender:

o Ack clocking, which involves pacing the sender to send at approximately the rate it can deliver data to the receiver through a queue;

o Lossless signaling that a queue depth that corresponds to a certain delay threshold has been reached, if ECN is in use;

o Intentional signaling via loss that a queue depth that corresponds to a certain delay threshold has been reached, if ECN is not in use; and

o Defensive loss, which is when a sender sends faster than available capacity (such as by probing network capacity when fully utilizing that capacity) and overburdens a queue.

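The ordering that distinguishes this case from the CoDel case, the AQM mark/drop decision before enqueue, followed by the defensive drop, can be sketched as follows. The `drop_prob` input stands in for the probability that the PIE or RED control law computes; all names are illustrative.

```python
import random
from collections import deque

def enqueue(q, packet, drop_prob, limit, ecn_capable, rng=random.random):
    """Sketch of the PIE/RED-style ordering described above: the AQM
    mark/drop decision precedes enqueue; if the queue is out of
    resources anyway, the defensive drop steps in.  `drop_prob` stands
    in for the probability computed by the AQM control law."""
    if rng() < drop_prob:
        if ecn_capable:
            packet = (packet, "ECN-CE")   # lossless signal, then enqueue
        else:
            return "dropped"              # intentional signal via loss
    if len(q) >= limit:
        return "tail-dropped"             # defensive loss
    q.append(packet)
    return "enqueued"
```

In contrast to the CoDel sketch, a marked packet still enters the queue here; only a non-ECN-capable packet selected by the AQM is lost before enqueue.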
4. Conclusion

To summarize, in Section 2, implementation approaches for several classes of queuing algorithms were explored. Queuing algorithms such as SFQ, Virtual Clock, and FlowQueue-CoDel [FQ-CODEL] have value in the network in that they delay packets to enforce a rate upper bound or to permit competing flows to compete more effectively. ECN marking and loss are also useful signals if used in a manner that enhances TCP / Stream Control Transmission Protocol (SCTP) operation or restrains unmanaged UDP data flows.

Conceptually, queuing algorithms and mark/drop algorithms operate in series (as discussed in Section 3), not as a single algorithm. The observed effects differ: defensive loss protects the intermediate system and provides a signal, AQM mark/drop works to reduce mean latency, and the scheduling of flows works to modify flow interleave and acknowledgement pacing. Certain features like flow isolation are provided by fair-queuing-related designs but are not the effect of the mark/drop algorithm.

There is value in implementing and coupling the operation of both queuing algorithms and queue management algorithms, and there is definitely interesting research in this area, but specifications, measurements, and comparisons should decouple the different algorithms and their contributions to system behavior.

5. Security Considerations

This memo adds no new security issues; it observes implementation strategies for Diffserv.

6. References
6.1. Normative References

[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, DOI 10.17487/RFC2475, December 1998, <http://www.rfc-editor.org/info/rfc2475>.

6.2. Informative References

[AQM-PIE] Pan, R., Natarajan, P., and F. Baker, "PIE: A Lightweight Control Scheme To Address the Bufferbloat Problem", Work in Progress, draft-ietf-aqm-pie-06, April 2016.

[CalendarQueue] Brown, R., "Calendar queues: a fast 0(1) priority queue implementation for the simulation event set problem", Communications of the ACM, Volume 31, Issue 10, pp. 1220-1227, DOI 10.1145/63039.63045, October 1988, <http://dl.acm.org/citation.cfm?id=63045>.

[Deadline] Kruk, L., Lehoczky, J., Ramanan, K., and S. Shreve, "Heavy Traffic Analysis For EDF Queues With Reneging", The Annals of Applied Probability, Volume 21, Issue No. 2, pp. 484-545, DOI 10.1214/10-AAP681, 2011, <http://www.math.cmu.edu/users/shreve/Reneging.pdf>.

[DELAY-AQM] Nichols, K., Jacobson, V., McGregor, A., and J. Iyengar, "Controlled Delay Active Queue Management", Work in Progress, draft-ietf-aqm-codel-03, March 2016.

[DRR] Shreedhar, M. and G. Varghese, "Efficient fair queuing using deficit round-robin", IEEE/ACM Transactions on Networking, Volume 4, Issue 3, pp. 375-385, DOI 10.1109/90.502236, June 1996, <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=502236>.

[FQ-CODEL] Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys, J., and E. Dumazet, "The FlowQueue-CoDel Packet Scheduler and Active Queue Management Algorithm", Work in Progress, draft-ietf-aqm-fq-codel-06, March 2016.

[GPS] Demers, A., Keshav, S., and S. Shenker, "Analysis and Simulation of a Fair Queueing Algorithm", ACM SIGCOMM Computer Communication Review, Volume 19, Issue 4, pp. 1-12, DOI 10.1145/75247.75248, September 1989, <http://blizzard.cs.uwaterloo.ca/keshav/home/Papers/data/89/fq.pdf>.

[NoFair] Briscoe, B., "Flow rate fairness: dismantling a religion", ACM SIGCOMM Computer Communication Review, Volume 37, Issue 2, pp. 63-74, DOI 10.1145/1232919.1232926, April 2007, <http://dl.acm.org/citation.cfm?id=1232926>.

[RFC970] Nagle, J., "On Packet Switches With Infinite Storage", RFC 970, DOI 10.17487/RFC0970, December 1985, <http://www.rfc-editor.org/info/rfc970>.

[RFC2990] Huston, G., "Next Steps for the IP QoS Architecture", RFC 2990, DOI 10.17487/RFC2990, November 2000, <http://www.rfc-editor.org/info/rfc2990>.

[RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition of Explicit Congestion Notification (ECN) to IP", RFC 3168, DOI 10.17487/RFC3168, September 2001, <http://www.rfc-editor.org/info/rfc3168>.

[RFC3390] Allman, M., Floyd, S., and C. Partridge, "Increasing TCP's Initial Window", RFC 3390, DOI 10.17487/RFC3390, October 2002, <http://www.rfc-editor.org/info/rfc3390>.

[RFC5690] Floyd, S., Arcia, A., Ros, D., and J. Iyengar, "Adding Acknowledgement Congestion Control to TCP", RFC 5690, DOI 10.17487/RFC5690, February 2010, <http://www.rfc-editor.org/info/rfc5690>.

[RFC6057] Bastian, C., Klieber, T., Livingood, J., Mills, J., and R. Woundy, "Comcast's Protocol-Agnostic Congestion Management System", RFC 6057, DOI 10.17487/RFC6057, December 2010, <http://www.rfc-editor.org/info/rfc6057>.

[RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., and K. Carlberg, "Explicit Congestion Notification (ECN) for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 2012, <http://www.rfc-editor.org/info/rfc6679>.

[RFC6928] Chu, J., Dukkipati, N., Cheng, Y., and M. Mathis, "Increasing TCP's Initial Window", RFC 6928, DOI 10.17487/RFC6928, April 2013, <http://www.rfc-editor.org/info/rfc6928>.

[RFC7141] Briscoe, B. and J. Manner, "Byte and Packet Congestion Notification", BCP 41, RFC 7141, DOI 10.17487/RFC7141, February 2014, <http://www.rfc-editor.org/info/rfc7141>.

[RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF Recommendations Regarding Active Queue Management", BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015, <http://www.rfc-editor.org/info/rfc7567>.

[SFQ] Mckenney, P., "Stochastic Fairness Queuing", Proceedings of IEEE INFOCOM '90, Volume 2, pp. 733-740, DOI 10.1109/INFCOM.1990.91316, June 1990, <http://www2.rdrop.com/~paulmck/scalability/paper/sfq.2002.06.04.pdf>.

[VirtualClock] Zhang, L., "VirtualClock: A New Traffic Control Algorithm for Packet Switching Networks", Proceedings of the ACM Symposium on Communications Architectures and Protocols, Volume 20, DOI 10.1145/99508.99525, September 1990, <http://dl.acm.org/citation.cfm?id=99508.99525>.

Acknowledgements

This note grew out of, and is in response to, mailing list discussions in the AQM working group, in which some have proposed comparing an algorithm that includes flow queuing directly against AQM marking and dropping algorithms.

Authors' Addresses

Fred Baker
Cisco Systems
Santa Barbara, California 93117
United States

   Email: fred@cisco.com
        

Rong Pan
Cisco Systems
Milpitas, California 95035
United States

   Email: ropan@cisco.com
        