Network Working Group                                             W. Lai
Request for Comments: 4128                                     AT&T Labs
Category: Informational                                        June 2005
Bandwidth Constraints Models for Differentiated Services (Diffserv)-aware MPLS Traffic Engineering: Performance Evaluation
Status of This Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2005).
IESG Note

The content of this RFC has been considered by the IETF (specifically in the TE-WG working group, which has no problem with publication as an Informational RFC), and therefore it may resemble a current IETF work in progress or a published IETF work. However, this document is an individual submission and not a candidate for any level of Internet Standard. The IETF disclaims any knowledge of the fitness of this RFC for any purpose, and in particular notes that it has not had complete IETF review for such things as security, congestion control or inappropriate interaction with deployed protocols. The RFC Editor has chosen to publish this document at its discretion. Readers of this RFC should exercise caution in evaluating its value for implementation and deployment. See RFC 3932 for more information.
Abstract

"Differentiated Services (Diffserv)-aware MPLS Traffic Engineering Requirements", RFC 3564, specifies the requirements and selection criteria for Bandwidth Constraints Models. Two such models, the Maximum Allocation and the Russian Dolls, are described therein. This document complements RFC 3564 by presenting the results of a performance evaluation of these two models under various operational conditions: normal load, overload, preemption fully or partially enabled, pure blocking, or complete sharing.
Table of Contents

   1. Introduction
      1.1. Conventions used in this document
   2. Bandwidth Constraints Models
   3. Performance Model
      3.1. LSP Blocking and Preemption
      3.2. Example Link Traffic Model
      3.3. Performance under Normal Load
   4. Performance under Overload
      4.1. Bandwidth Sharing versus Isolation
      4.2. Improving Class 2 Performance at the Expense of Class 3
      4.3. Comparing Bandwidth Constraints of Different Models
   5. Performance under Partial Preemption
      5.1. Russian Dolls Model
      5.2. Maximum Allocation Model
   6. Performance under Pure Blocking
      6.1. Russian Dolls Model
      6.2. Maximum Allocation Model
   7. Performance under Complete Sharing
   8. Implications on Performance Criteria
   9. Conclusions
   10. Security Considerations
   11. Acknowledgements
   12. References
      12.1. Normative References
      12.2. Informative References
1. Introduction

Differentiated Services (Diffserv)-aware MPLS Traffic Engineering (DS-TE) mechanisms operate on the basis of different Diffserv classes of traffic to improve network performance. Requirements for DS-TE and the associated protocol extensions are specified in references [1] and [2] respectively.

To achieve per-class traffic engineering, rather than on an aggregate basis across all classes, DS-TE enforces different Bandwidth Constraints (BCs) on different classes. Reference [1] specifies the requirements and selection criteria for Bandwidth Constraints Models (BCMs) for the purpose of allocating bandwidth to individual classes.

This document presents a performance analysis for the two BCMs described in [1]:
(1) Maximum Allocation Model (MAM) - the maximum allowable bandwidth usage of each class, together with the aggregate usage across all classes, is explicitly specified.
(2) Russian Dolls Model (RDM) - specification of maximum allowable usage is done cumulatively by grouping successive priority classes recursively.
The following criteria are also listed in [1] for investigating the performance and trade-offs of different operational aspects of BCMs:

(1) addresses the scenarios in Section 2 of [1]

(2) works well under both normal and overload conditions

(3) applies equally when preemption is either enabled or disabled

(4) minimizes signaling load processing requirements

(5) maximizes efficient use of the network

(6) minimizes implementation and deployment complexity
The use of any given BCM has significant impacts on the capability of a network to provide protection for different classes of traffic, particularly under high load, so that performance objectives can be met [3]. This document complements [1] by presenting the results of a performance evaluation of the above two BCMs under various operational conditions: normal load, overload, preemption fully or partially enabled, pure blocking, or complete sharing. Thus, our focus is only on the performance-oriented criteria and their implications for a network implementation. In other words, we are only concerned with criteria (2), (3), and (5); we will not address criteria (1), (4), or (6).
Related documents in this area include [4], [5], [6], [7], and [8].

In the rest of this document, the following DS-TE acronyms are used:

   BC    Bandwidth Constraint
   BCM   Bandwidth Constraints Model
   MAM   Maximum Allocation Model
   RDM   Russian Dolls Model
There may be differences between the quality of service expressed and obtained with Diffserv without DS-TE and with DS-TE. Because DS-TE uses Constraint Based Routing, and because of the type of admission control capabilities it adds to Diffserv, DS-TE has capabilities for traffic that Diffserv does not. Diffserv does not indicate preemption, by intent, whereas DS-TE describes multiple levels of preemption for its Class-Types. Also, Diffserv does not support any means of explicitly controlling overbooking, while DS-TE allows this. When considering a complete quality of service environment, with Diffserv routers and DS-TE, it is important to consider these differences carefully.
1.1. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
2. Bandwidth Constraints Models

To simplify our presentation, we use the informal name "class of traffic" for the terms Class-Type and TE-Class, defined in [1]. We assume that (1) there are only three classes of traffic, and that (2) all label-switched paths (LSPs), regardless of class, require the same amount of bandwidth. Furthermore, the focus is on the bandwidth usage of an individual link with a given capacity; routing aspects of LSP setup are not considered.

The concept of reserved bandwidth is also defined in [1] to account for the possible use of overbooking. Rather than get into these details, we assume that each LSP is allocated 1 unit of bandwidth on a given link after establishment. This allows us to express link bandwidth usage simply in terms of the number of simultaneously established LSPs. Link capacity can then be used as the aggregate constraint on bandwidth usage across all classes.
Suppose that the three classes of traffic assumed above for the purposes of this document are denoted by class 1 (highest priority), class 2, and class 3 (lowest priority). When preemption is enabled, these are the preemption priorities. To define a generic class of BCMs for the purpose of our analysis in accordance with the above assumptions, let

   Nmax = link capacity; i.e., the maximum number of simultaneously
          established LSPs for all classes together

   Nc = the number of simultaneously established class c LSPs, for
        c = 1, 2, and 3, respectively.

For MAM, let

   Bc = maximum number of simultaneously established class c LSPs.

Then, Bc is the Bandwidth Constraint for class c, and we have
   Nc <= Bc <= Nmax, for c = 1, 2, and 3
   N1 + N2 + N3 <= Nmax
   B1 + B2 + B3 >= Nmax
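The MAM constraints above translate directly into an admission-control check. The following is a minimal illustrative sketch (the function and variable names are ours, not from any DS-TE implementation); it tests whether one more 1-unit LSP of a given class can be established:

```python
def mam_admit(c, n, bc, nmax):
    """Return True if a new class-c LSP (1 bandwidth unit) is admissible
    under the Maximum Allocation Model.

    c    -- class index: 1, 2, or 3
    n    -- current LSP counts per class, e.g. {1: 4, 2: 5, 3: 2}
    bc   -- per-class Bandwidth Constraints {1: B1, 2: B2, 3: B3}
    nmax -- link capacity (max simultaneous LSPs across all classes)
    """
    if n[c] + 1 > bc[c]:             # per-class constraint: Nc <= Bc
        return False
    if sum(n.values()) + 1 > nmax:   # aggregate: N1 + N2 + N3 <= Nmax
        return False
    return True
```

With the example BCs used later in Section 3.2 (B1 = 6, B2 = 7, B3 = 15, Nmax = 15), a class-3 request can be rejected by the aggregate constraint even though B3 alone would allow it.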
For RDM, the BCs are specified as:

   B1 = maximum number of simultaneously established class 1 LSPs

   B2 = maximum number of simultaneously established LSPs for
        classes 1 and 2 together

   B3 = maximum number of simultaneously established LSPs for
        classes 1, 2, and 3 together

Then, we have the following relationships:
   N1 <= B1
   N1 + N2 <= B2
   N1 + N2 + N3 <= B3
   B1 < B2 < B3 = Nmax
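Under RDM, admitting a class-c LSP must respect every cumulative constraint from level c up to level 3, since Nc contributes to all of those sums. A minimal illustrative sketch, with names of our own choosing:

```python
def rdm_admit(c, n, bc):
    """Return True if a new class-c LSP (1 bandwidth unit) is admissible
    under the Russian Dolls Model.

    c  -- class index: 1, 2, or 3 (1 = highest priority)
    n  -- current LSP counts per class, e.g. {1: 4, 2: 5, 3: 2}
    bc -- cumulative BCs: bc[1] bounds N1, bc[2] bounds N1+N2,
          and bc[3] (= Nmax) bounds N1+N2+N3
    """
    for k in range(c, 4):
        # cumulative constraint at level k: N1 + ... + Nk <= Bk
        if sum(n[j] for j in range(1, k + 1)) + 1 > bc[k]:
            return False
    return True
```

For the BCs (6, 11, 15) used later in Section 3.2, a class-2 request is rejected once classes 1 and 2 together already hold 11 LSPs, even if the link itself still has room.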
3. Performance Model

Reference [8] presents a 3-class Markov-chain performance model to analyze a general class of BCMs. The BCMs that can be analyzed include, besides MAM and RDM, BCMs with privately reserved bandwidth that cannot be preempted by other classes.

The Markov-chain performance model in [8] assumes Poisson arrivals for LSP requests with exponentially distributed lifetime. The Poisson assumption for LSP requests is relevant since we are not dealing with the arrivals of individual packets within an LSP. Also, LSP lifetime may exhibit heavy-tail characteristics. This effect should be accounted for when the performance of a particular BCM by itself is evaluated. As the effect would be common for all BCMs, we ignore it for simplicity in the comparative analysis of the relative performance of different BCMs. In principle, a suitably chosen hyperexponential distribution may be used to capture some aspects of heavy tail. However, this will significantly increase the complexity of the non-product-form preemption model in [8].

The model in [8] assumes the use of admission control to allocate link bandwidth to LSPs of different classes in accordance with their respective BCs. Thus, the model accepts as input the link capacity and offered load from different classes. The blocking and preemption probabilities for different classes under different BCs are generated as output. Thus, from a service provider's perspective, given the desired level of blocking and preemption performance, the model can be used iteratively to determine the corresponding set of BCs.

To understand the implications of using criteria (2), (3), and (5) in the Introduction Section to select a BCM, we present some numerical results of the analysis in [8]. This is intended to facilitate discussion of the issues that can arise. The major performance objective is to achieve a balance between the need for bandwidth sharing (for increasing bandwidth efficiency) and the need for bandwidth isolation (for protecting bandwidth access by different classes).
3.1. LSP Blocking and Preemption

As described in Section 2, the three classes of traffic used as an example are class 1 (highest priority), class 2, and class 3 (lowest priority). Preemption may or may not be used, and we will examine the performance of each scenario. When preemption is used, the priorities are the preemption priorities. We consider cross-class preemption only, with no within-class preemption. In other words, preemption is enabled so that, when necessary, class 1 can preempt class 3 or class 2 (in that order), and class 2 can preempt class 3.
Each class offers a load of traffic to the network that is expressed in terms of the arrival rate of its LSP requests and the average lifetime of an LSP. A unit of such a load is an erlang. (In packet-based networks, traffic volume is usually measured by counting the number of bytes and/or packets that are sent or received over an interface during a measurement period. Here we are only concerned with bandwidth allocation and usage at the LSP level. Therefore, as a measure of resource utilization in a link-speed independent manner, the erlang is an appropriate unit for our purpose [9].)
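The LSP-level traffic model just described (Poisson request arrivals, exponential lifetimes, loss on admission failure) can be exercised with a small discrete-event simulation. This sketch is only illustrative; it models a single class with complete sharing of the link, not the multi-class Markov model of [8]:

```python
import heapq
import random

def simulate_blocking(capacity, arrival_rate, mean_lifetime,
                      num_arrivals, seed=1):
    """Estimate LSP blocking on a link of the given capacity (in LSPs).

    Requests arrive as a Poisson process and hold 1 bandwidth unit for
    an exponentially distributed lifetime; a request that finds the
    link full is blocked.  Offered load = arrival_rate * mean_lifetime
    erlangs.
    """
    rng = random.Random(seed)
    now = 0.0
    releases = []                    # min-heap of LSP release times
    blocked = 0
    for _ in range(num_arrivals):
        now += rng.expovariate(arrival_rate)        # next LSP request
        while releases and releases[0] <= now:      # tear down expired LSPs
            heapq.heappop(releases)
        if len(releases) >= capacity:
            blocked += 1                            # admission control rejects
        else:
            heapq.heappush(releases,
                           now + rng.expovariate(1.0 / mean_lifetime))
    return blocked / num_arrivals

# 9.7 erlangs offered to a 15-LSP link: blocking comes out at a few percent
p = simulate_blocking(15, 9.7, 1.0, 200_000)
```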
To prevent Diffserv QoS degradation at the packet level, the expected number of established LSPs for a given class should be kept in line with the average service rate that the Diffserv scheduler can provide to that class. Because of the use of overbooking, the actual traffic carried by a link may be higher than expected, and hence QoS degradation may not be totally avoidable.

However, the use of admission control at the LSP level helps minimize QoS degradation by enforcing the BCs established for the different classes, according to the rules of the BCM adopted. That is, the BCs are used to determine the number of LSPs that can be simultaneously established for different classes under various operational conditions. By controlling the number of LSPs admitted from different classes, this in turn ensures that the amount of traffic submitted to the Diffserv scheduler is compatible with the targeted packet-level QoS objectives.

The performance of a BCM can therefore be measured by how well the given BCM handles the offered traffic, under normal or overload conditions, while maintaining packet-level service objectives. Thus, assuming that the enforcement of Diffserv QoS objectives by admission control is a given, the performance of a BCM can be expressed in terms of LSP blocking and preemption probabilities.

Different BCMs have different strengths and weaknesses. Depending on the BCs chosen for a given load, a BCM may perform well in one operating region and poorly in another. Service providers are mainly concerned with the utility of a BCM to meet their operational needs. Regardless of which BCM is deployed, the foremost consideration is that the BCM works well under the engineered load, such as the ability to deliver service-level objectives for LSP blocking probabilities. It is also expected that the BCM handles overload "reasonably" well. Thus, for comparison, the common operating point we choose for BCMs is that they meet specified performance objectives in terms of blocking/preemption under given normal load. We then observe how their performance varies under overload. More will be said about this aspect later in Section 4.2.
3.2. Example Link Traffic Model

For example, consider a link with a capacity that allows a maximum of 15 LSPs from different classes to be established simultaneously. All LSPs are assumed to have an average lifetime of 1 time unit. Suppose that this link is being offered a load of

   2.7 erlangs from class 1,
   3.5 erlangs from class 2, and
   3.5 erlangs from class 3.
We now consider a scenario wherein the blocking/preemption performance objectives for the three classes are desired to be comparable under normal conditions (other scenarios are covered in later sections). To meet this service requirement under the above given load, the BCs are selected as follows:

For MAM:

   up to  6 simultaneous LSPs for class 1,
   up to  7 simultaneous LSPs for class 2, and
   up to 15 simultaneous LSPs for class 3.

For RDM:

   up to  6 simultaneous LSPs for class 1 by itself,
   up to 11 simultaneous LSPs for classes 1 and 2 together, and
   up to 15 simultaneous LSPs for all three classes together.
Note that the driver is the service requirement, independent of the BCM. The above BCs are not picked arbitrarily; they are chosen to meet specific performance objectives in terms of blocking/preemption (detailed in the next section).
An intuitive "explanation" for the above set of BCs may be as follows. Class 1 BC is the same (6) for both models, as class 1 is treated the same way under either model with preemption. However, MAM and RDM operate in fundamentally different ways and give different treatments to classes with lower preemption priorities. It can be seen from Section 2 that although RDM imposes a strict ordering of the different BCs (B1 < B2 < B3) and a hard boundary (B3 = Nmax), MAM uses a soft boundary (B1+B2+B3 >= Nmax) with no specific ordering. As will be explained in Section 4.3, this allows RDM to have a higher degree of sharing among different classes. Such a higher degree of coupling means that the numerical values of the BCs can be relatively smaller than those for MAM, to meet given performance requirements under normal load.

Thus, in the above example, the RDM BCs of (6, 11, 15) may be thought of as roughly corresponding to the MAM BCs of (6, 6+7, 6+7+15). (The intent here is just to point out that the design parameters for the two BCMs need to be different, as they operate differently; strictly speaking, the numerical correspondence is incorrect.) Of course, both BCMs are bounded by the same aggregate constraint of the link capacity (15).

The BCs chosen in the above example are not intended to be regarded as typical values used by any service provider. They are used here mainly for illustrative purposes. The method we used for analysis can easily accommodate another set of parameter values as input.
3.3. Performance under Normal Load

In the example above, based on the BCs chosen, the blocking and preemption probabilities for LSP setup requests under normal conditions for the two BCMs are given in Table 1. Remember that the BCs have been selected for this scenario to address the service requirement to offer comparable blocking/preemption objectives for the three classes.
Table 1. Blocking and preemption probabilities

   BCM      PB1      PB2      PB3      PP2      PP3  PB2+PP2  PB3+PP3
   MAM  0.03692  0.03961  0.02384        0  0.02275  0.03961  0.04659
   RDM  0.03692  0.02296  0.02402  0.01578  0.01611  0.03874  0.04013

In the above table, the following apply:

   PB1 = blocking probability of class 1
   PB2 = blocking probability of class 2
   PB3 = blocking probability of class 3

   PP2 = preemption probability of class 2
   PP3 = preemption probability of class 3

   PB2+PP2 = combined blocking/preemption probability of class 2
   PB3+PP3 = combined blocking/preemption probability of class 3
First, we observe that, indeed, the values for (PB1, PB2+PP2, PB3+PP3) are very similar one to another. This confirms that the service requirement (of comparable blocking/preemption objectives for the three classes) has been met for both BCMs.

Then, we observe that the (PB1, PB2+PP2, PB3+PP3) values for MAM are very similar to the (PB1, PB2+PP2, PB3+PP3) values for RDM. This indicates that, in this scenario, both BCMs offer very similar performance under normal load.

From column 2 of Table 1, it can be seen that class 1 sees exactly the same blocking under both BCMs. This should be obvious since both allocate up to 6 simultaneous LSPs for use by class 1 only. Slightly better results are obtained from RDM, as shown by the last two columns in Table 1. This comes about because the cascaded bandwidth separation in RDM effectively gives class 3 some form of protection from being preempted by higher-priority classes.

Also, note that PP2 is zero in this particular case, simply because the BCs for MAM happen to have been chosen in such a way that class 1 never has to preempt class 2 for any of the bandwidth that class 1 needs. (This is because class 1 can, in the worst case, get all the bandwidth it needs simply by preempting class 3 alone.) In general, this will not be the case.
It is interesting to compare these results with those for the case of a single class. Based on the Erlang loss formula, a capacity of 15 servers can support an offered load of 10 erlangs with a blocking probability of 0.0364969. Although the total load for the 3-class BCM is lower, at 2.7 + 3.5 + 3.5 = 9.7 erlangs, the probabilities of blocking/preemption are higher. Thus, there is some loss of efficiency due to the link bandwidth being partitioned to accommodate different traffic classes, thereby resulting in less sharing. This aspect will be examined in more detail later, in Section 7 on Complete Sharing.
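The single-class number above is easy to reproduce. This sketch uses the standard iterative recurrence for the Erlang B (Erlang loss) formula; it is an illustration of that formula, not code from the referenced models:

```python
def erlang_b(servers, offered_load):
    """Blocking probability of an M/M/c/c loss system, computed with the
    numerically stable recurrence B(0)=1, B(k)=a*B(k-1)/(k+a*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# 10 erlangs offered to a capacity of 15 servers, as in the text
p = erlang_b(15, 10.0)   # ~0.0364969
```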
4. Performance under Overload

Overload occurs when the traffic on a system is greater than the traffic capacity of the system. To investigate the performance under overload conditions, the load of each class is varied separately. Blocking and preemption probabilities are not shown separately for each case; they are added together to yield a combined blocking/preemption probability.
4.1. Bandwidth Sharing versus Isolation

Figures 1 and 2 show the relative performance when the load of each class in the example of Section 3.2 is varied separately. The three series of data in each of these figures are, respectively,

   class 1 blocking probability ("Class 1 B"),
   class 2 blocking/preemption probability ("Class 2 B+P"), and
   class 3 blocking/preemption probability ("Class 3 B+P").
For each of these series, the first set of four points is for the performance when class 1 load is increased from half of its normal load to twice its normal. Similarly, the next and the last sets of four points are when class 2 and class 3 loads are increased correspondingly.

The following observations apply to both BCMs:

1. The performance of any class generally degrades as its load increases.

2. The performance of class 1 is not affected by any changes (increases or decreases) in either class 2 or class 3 traffic, because class 1 can always preempt others.

3. Similarly, the performance of class 2 is not affected by any changes in class 3 traffic.

4. Class 3 sees better (worse) than normal performance when either class 1 or class 2 traffic is below (above) normal.
In contrast, the impact of the changes in class 1 traffic on class 2 performance is different for the two BCMs: It is negligible in MAM and significant in RDM.

1. Although class 2 sees little improvement (no improvement in this particular example) in performance when class 1 traffic is below normal when MAM is used, it sees better than normal performance under RDM.

2. Class 2 sees no degradation in performance when class 1 traffic is above normal when MAM is used. In this example, with BCs 6 + 7 < 15, class 1 and class 2 traffic is effectively being served by separate pools. Therefore, class 2 sees no preemption, and only class 3 is being preempted whenever necessary. This fact is confirmed by the Erlang loss formula: a load of 2.7 erlangs offered to 6 servers sees a 0.03692 blocking, and a load of 3.5 erlangs offered to 7 servers sees a 0.03961 blocking. These blocking probabilities are exactly the same as the corresponding entries in Table 1: PB1 and PB2 for MAM.
3. This is not the case in RDM. Here, the probability for class 2 to be preempted by class 1 is nonzero because of two effects. (1) Through the cascaded bandwidth arrangement, class 3 is protected somewhat from preemption. (2) Class 2 traffic is sharing a BC with class 1. Consequently, class 2 suffers when class 1 traffic increases.
Thus, it appears that although the cascaded bandwidth arrangement and the resulting bandwidth sharing makes RDM work better under normal conditions, such interaction makes it less effective to provide class isolation under overload conditions.
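Observation 2 above (classes 1 and 2 acting as separate pools under MAM) can be checked with the same Erlang loss formula quoted in the text. An illustrative sketch, using the standard recurrence:

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability via the recurrence
    B(0)=1, B(k)=a*B(k-1)/(k+a*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# With MAM BCs 6 + 7 < 15, each of classes 1 and 2 has its own pool:
pb1 = erlang_b(6, 2.7)   # ~0.03692, PB1 for MAM in Table 1
pb2 = erlang_b(7, 3.5)   # ~0.03961, PB2 for MAM in Table 1
```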
We now consider a scenario in which the service requirement is to give better blocking/preemption performance to class 2 than to class 3, while maintaining class 1 performance at the same level as in the previous scenario. (The use of minimum deterministic guarantee for class 3 is to be considered in the next section.) So that the specified class 2 performance objective can be met, class 2 BC is increased appropriately. As an example, BCs (6, 9, 15) are now used for MAM, and (6, 13, 15) for RDM. For both BCMs, as shown in Figures 1bis and 2bis, although class 1 performance remains unchanged, class 2 now receives better performance, at the expense of class 3. This is of course due to the increased access of bandwidth by class 2 over class 3. Under normal conditions, the performance of the two BCMs is similar in terms of their blocking and preemption probabilities for LSP setup requests, as shown in Table 2.
Table 2. Blocking and preemption probabilities
BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3

MAM    0.03692  0.00658  0.02733  0        0.02709  0.00658  0.05441
RDM    0.03692  0.00449  0.02759  0.00272  0.02436  0.00721  0.05195
Under overload, the observations in Section 4.1 regarding the difference in the general behavior between the two BCMs still apply, as shown in Figures 1bis and 2bis.
The following are two frequently asked questions about the operation of BCMs.
(1) For a link capacity of 15, would a class 1 BC of 6 and a class 2 BC of 9 in MAM result in the possibility of a total lockout for class 3?
This will certainly be the case when there are 6 class 1 and 9 class 2 LSPs established simultaneously. Such an offered load (with 6 class 1 and 9 class 2 LSP requests) will not cause a lockout of class 3 with RDM, which has a BC of 13 for classes 1 and 2 combined, but will result in class 2 LSPs being rejected. If class 2 traffic were considered relatively more important than class 3 traffic, then RDM would perform very poorly compared to MAM with BCs of (6, 9, 15).
(2) Should MAM with BCs of (6, 7, 15) be used instead so as to make the performance of RDM look comparable?
The answer is that the above scenario is not very realistic when the offered load is assumed to be (2.7, 3.5, 3.5) for the three classes, as stated in Section 3.2. Treating an overload of (6, 9, x) as a normal operating condition is incompatible with the engineering of BCs according to needed bandwidth from different classes. It would be rare for a given class to need so much more than its engineered bandwidth level. But if the class did, the expectation based on design and normal traffic fluctuations is that this class would quickly release unneeded bandwidth toward its engineered level, freeing up bandwidth for other classes.
Service providers engineer their networks based on traffic projections to determine network configurations and needed capacity. All BCMs should be designed to operate under realistic network conditions. For any BCM to work properly, the selection of values for different BCs must therefore be based on the projected bandwidth needs of each class, as well as on the bandwidth allocation rules of the BCM itself. This is to ensure that the BCM works as expected under the intended design conditions. In operation, the actual load may well turn out to be different from that of the design. Thus, an assessment of the performance of a BCM under overload is essential to see how well the BCM can cope with traffic surges or network failures. Reflecting this view, the basis for comparison of two BCMs is that they meet the same or similar performance requirements under normal conditions, and how they withstand overload.
In operational practice, load measurement and forecast would be useful to calibrate and fine-tune the BCs so that traffic from different classes could be redistributed accordingly. Dynamic adjustment of the Diffserv scheduler could also be used to minimize QoS degradation.
As is pointed out in Section 3.2, the higher degree of sharing among the different classes in RDM means that the numerical values of the BCs could be relatively smaller than those for MAM. We now examine this aspect in more detail by considering the following scenario. We set the BCs so that (1) for both BCMs, the same value is used for class 1, (2) the same minimum deterministic guarantee of bandwidth for class 3 is offered by both BCMs, and (3) the blocking/preemption
probability is minimized for class 2. We want to emphasize that this may not be the way service providers select BCs. It is done here to investigate the statistical behavior of such a deterministic mechanism.
For illustration, we use BCs (6, 7, 15) for MAM, and (6, 13, 15) for RDM. In this case, both BCMs have 13 units of bandwidth for classes 1 and 2 together, and dedicate 2 units of bandwidth for use by class 3 only. The performance of the two BCMs under normal conditions is shown in Table 3. It is clear that MAM with (6, 7, 15) gives fairly comparable performance objectives across the three classes, whereas RDM with (6, 13, 15) strongly favors class 2 at the expense of class 3. They therefore cater to different service requirements.
Table 3. Blocking and preemption probabilities
BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3

MAM    0.03692  0.03961  0.02384  0        0.02275  0.03961  0.04659
RDM    0.03692  0.00449  0.02759  0.00272  0.02436  0.00721  0.05195
By comparing Figures 1 and 2bis, it can be seen that, when being subjected to the same set of BCs, RDM gives class 2 much better performance than MAM, with class 3 being only slightly worse.
This confirms the observation in Section 3.2 that, when the same service requirements under normal conditions are to be met, the numerical values of the BCs for RDM can be relatively smaller than those for MAM. This should not be surprising in view of the hard boundary (B3 = Nmax) in RDM versus the soft boundary (B1+B2+B3 >= Nmax) in MAM. The strict ordering of BCs (B1 < B2 < B3) gives RDM the advantage of a higher degree of sharing among the different classes; i.e., the ability to reallocate the unused bandwidth of higher-priority classes to lower-priority ones, if needed. Consequently, this leads to better performance when an identical set of BCs is used as exemplified above. Such a higher degree of sharing may necessitate the use of minimum deterministic bandwidth guarantee to offer some protection for lower-priority traffic from preemption. The explicit lack of ordering of BCs in MAM and its soft boundary imply that the use of minimum deterministic guarantees for lower-priority classes may not need to be enforced when there is a lesser degree of sharing. This is demonstrated by the example in Section 4.2 with BCs (6, 9, 15) for MAM.
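The contrast between MAM's per-class limits under a soft boundary (B1+B2+B3 >= Nmax) and RDM's strictly ordered cascade with a hard boundary (B3 = Nmax) can be written as simple admission predicates. This is a sketch under our own naming; counts and BCs are tuples ordered from class 1 to class 3, with one bandwidth unit per LSP as in the rest of the document:

```python
def mam_ok(counts, bcs=(6, 7, 15), cap=15):
    # MAM: each class is bounded by its own BC; since the BCs may sum past
    # the link capacity (soft boundary), the link total is checked separately.
    return all(n <= b for n, b in zip(counts, bcs)) and sum(counts) <= cap

def rdm_ok(counts, bcs=(6, 11, 15)):
    # RDM: strictly ordered BCs (B1 < B2 < B3 = link capacity); BC k bounds
    # classes 1..k together, giving the cascaded "Russian dolls" structure.
    return all(sum(counts[:k + 1]) <= bcs[k] for k in range(len(bcs)))
```

The cascade makes the sharing visible: `rdm_ok((0, 11, 4))` holds because unused class 1 bandwidth flows to class 2, whereas under MAM with BCs (6, 7, 15) the same state violates the class 2 limit.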
For illustration, Table 4 shows the performance under normal conditions of RDM with BCs (6, 15, 15).
Table 4. Blocking and preemption probabilities
BCM    PB1      PB2      PB3      PP2      PP3      PB2+PP2  PB3+PP3

RDM    0.03692  0.00060  0.02800  0.00032  0.02740  0.00092  0.05540
Regardless of whether deterministic guarantees are used, both BCMs are bounded by the same aggregate constraint of the link capacity. Also, in both BCMs, bandwidth access guarantees are necessarily achieved statistically because of traffic fluctuations, as explained in Section 4.2. (As a result, service-level objectives are typically specified as monthly averages, under the use of statistical guarantees rather than deterministic guarantees.) Thus, given the fundamentally different operating principles of the two BCMs (ordering, hard versus soft boundary), the dimensions of one BCM should not be adopted to design for the other. Rather, it is the service requirements, and perhaps also the operational needs, of a service provider that should be used to drive how the BCs of a BCM are selected.
In the previous two sections, preemption is fully enabled in the sense that class 1 can preempt class 3 or class 2 (in that order), and class 2 can preempt class 3. That is, both classes 1 and 2 are preemptor-enabled, whereas classes 2 and 3 are preemptable. A class that is preemptor-enabled can preempt lower-priority classes designated as preemptable. A class not designated as preemptable cannot be preempted by any other classes, regardless of relative priorities.
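As a concrete (though simplified) reading of these definitions, the following sketch admits LSPs on a single link under MAM with BCs (6, 7, 15) and the fully enabled preemption mode. All names are ours, and each LSP is one bandwidth unit, as elsewhere in this document:

```python
CAPACITY = 15                      # link capacity in LSP slots
BC = {1: 6, 2: 7, 3: 15}           # MAM BCs (6, 7, 15)
PREEMPTOR_ENABLED = {1, 2}         # classes allowed to preempt
PREEMPTABLE = {2, 3}               # classes that may be preempted

def try_admit(active, cls):
    """Attempt to set up one class-`cls` LSP on the link. `active` maps
    class -> number of established LSPs and is updated in place."""
    if active[cls] >= BC[cls]:            # MAM per-class constraint
        return "blocked"
    if sum(active.values()) < CAPACITY:   # free capacity available
        active[cls] += 1
        return "admitted"
    if cls in PREEMPTOR_ENABLED:
        # Prefer the lowest-priority victim: class 1 preempts class 3,
        # then class 2 (in that order); class 2 can preempt class 3 only.
        for victim in (3, 2):
            if victim > cls and victim in PREEMPTABLE and active[victim] > 0:
                active[victim] -= 1
                active[cls] += 1
                return f"preempted class {victim}"
    return "blocked"
```

For example, on a full link with `active = {1: 5, 2: 4, 3: 6}`, a class 1 request preempts a class 3 LSP; a second class 1 request is then blocked by its own BC of 6. Partial preemption modes correspond to shrinking the `PREEMPTOR_ENABLED` and `PREEMPTABLE` sets.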
We now consider the three cases shown in Table 5, in which preemption is only partially enabled.
Table 5. Partial preemption modes
preemption mode          preemptor-enabled   preemptable

"1+2 on 3" (Fig. 3, 6)   class 1, class 2    class 3
"1 on 3"   (Fig. 4, 7)   class 1             class 3
"1 on 2+3" (Fig. 5, 8)   class 1             class 3, class 2
In this section, we evaluate how these preemption modes affect the performance of a particular BCM. Thus, we are comparing how a given BCM performs when preemption is fully enabled versus how the same BCM performs when preemption is partially enabled. The performance of these preemption modes is shown in Figures 3 to 5 for RDM, and in Figures 6 through 8 for MAM, respectively. In all of these figures,
the BCs of Section 3.2 are used for illustration; i.e., (6, 7, 15) for MAM and (6, 11, 15) for RDM. However, the general behavior is similar when the BCs are changed to those in Sections 4.2 and 4.3; i.e., (6, 9, 15) and (6, 13, 15), respectively.
Let us first examine the performance under RDM. There are two sets of results, depending on whether class 2 is preemptable: (1) Figures 3 and 4 for the two modes when only class 3 is preemptable, and (2) Figure 2 in the previous section and Figure 5 for the two modes when both classes 2 and 3 are preemptable. By comparing these two sets of results, the following impacts can be observed. Specifically, when class 2 is non-preemptable, the behavior of each class is as follows:
1. Class 1 generally sees a higher blocking probability. As the class 1 space allocated by the class 1 BC is shared with class 2, which is now non-preemptable, class 1 cannot reclaim any such space occupied by class 2 when needed. Also, class 1 has less opportunity to preempt, as it is able to preempt class 3 only.
2. Class 3 also sees higher blocking/preemption when its own load is increased, as it is being preempted more frequently by class 1, when class 1 cannot preempt class 2. (See the last set of four points in the series for class 3 shown in Figures 3 and 4, when comparing with Figures 2 and 5.)
3. Class 2 blocking/preemption is reduced even when its own load is increased, since it is not being preempted by class 1. (See the middle set of four points in the series for class 2 shown in Figures 3 and 4, when comparing with Figures 2 and 5.)
Another two sets of results are related to whether class 2 is preemptor-enabled. In this case, when class 2 is not preemptor-enabled, class 2 blocking/preemption is increased when class 3 load is increased. (See the last set of four points in the series for class 2 shown in Figures 4 and 5, when comparing with Figures 2 and 3.) This is because both classes 2 and 3 are now competing independently with each other for resources.
Turning now to MAM, the significant impact appears to be only on class 2, when it cannot preempt class 3, thereby causing its blocking/preemption to increase in two situations.
1. When class 1 load is increased. (See the first set of four points in the series for class 2 shown in Figures 7 and 8, when comparing with Figures 1 and 6.)
2. When class 3 load is increased. (See the last set of four points in the series for class 2 shown in Figures 7 and 8, when comparing with Figures 1 and 6.) This is similar to RDM; i.e., class 2 and class 3 are now competing with each other.
When Figure 1 (for the case of fully enabled preemption) is compared to Figures 6 through 8 (for partially enabled preemption), it can be seen that the performance of MAM is relatively insensitive to the different preemption modes. This is because when each class has its own bandwidth access limits, the degree of interference among the different classes is reduced.
This is in contrast with RDM, whose behavior is more dependent on the preemption mode in use.
This section covers the case in which preemption is completely disabled. We continue with the numerical example used in the previous sections, with the same link capacity and offered load.
For RDM, we consider two different settings:
"Russian Dolls (1)" BCs:
up to 6 simultaneous LSPs for class 1 by itself, up to 11 simultaneous LSPs for classes 1 and 2 together, and up to 15 simultaneous LSPs for all three classes together.
"Russian Dolls (2)" BCs:
up to 9 simultaneous LSPs for class 3 by itself, up to 14 simultaneous LSPs for classes 3 and 2 together, and up to 15 simultaneous LSPs for all three classes together.
Note that the "Russian Dolls (1)" set of BCs is the same as previously with preemption enabled, whereas the "Russian Dolls (2)" has the cascade of bandwidth arranged in reverse order of the classes.
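Under the same cumulative-constraint reading used earlier, the "Russian Dolls (2)" cascade simply runs in reverse class order. Again a sketch with our own names, where `counts` is ordered class 1 to class 3:

```python
def rdm_reverse_ok(counts, bcs=(9, 14, 15)):
    """'Russian Dolls (2)' BCs: class 3 alone uses at most 9 slots,
    classes 3 and 2 together at most 14, and all three classes at most 15."""
    n1, n2, n3 = counts
    return n3 <= bcs[0] and n2 + n3 <= bcs[1] and n1 + n2 + n3 <= bcs[2]
```

Here it is the higher-priority classes that sit at the outer dolls, so class 1 can draw on any slot left unused by classes 2 and 3, which is what protects it in a pure blocking environment.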
As observed in Section 4, the cascaded bandwidth arrangement is intended to offer lower-priority traffic some protection from preemption by higher-priority traffic. This is to avoid starvation. In a pure blocking environment, such protection is no longer necessary. As depicted in Figure 9, it actually produces the opposite, undesirable effect: higher-priority traffic sees higher blocking than lower-priority traffic. With no preemption, higher-priority traffic should be protected instead to ensure that it could get through when under high load. Indeed, when the reverse cascade is used in "Russian Dolls (2)", the required performance of lower blocking for higher-priority traffic is achieved, as shown in Figure 10. In this specific example, there is very little difference among the performance of the three classes in the first eight data points for each of the three series. However, the BCs can be tuned to get a bigger differentiation.
For MAM, we also consider two different settings:
"Exp. Max. Alloc. (1)" BCs:
up to 7 simultaneous LSPs for class 1, up to 8 simultaneous LSPs for class 2, and up to 8 simultaneous LSPs for class 3.
"Exp. Max. Alloc. (2)" BCs:
up to 7 simultaneous LSPs for class 1, with additional bandwidth for 1 LSP privately reserved, up to 8 simultaneous LSPs for class 2, and up to 8 simultaneous LSPs for class 3.
These BCs are chosen so that, under normal conditions, the blocking performance is similar to all the previous scenarios. The only difference between these two sets of values is that the "Exp. Max. Alloc. (2)" algorithm gives class 1 a private pool of 1 server for class protection. As a result, class 1 has a relatively lower blocking especially when its traffic is above normal, as can be seen by comparing Figures 11 and 12. This comes, of course, with a slight increase in the blocking of classes 2 and 3 traffic.
When comparing the "Russian Dolls (2)" in Figure 10 with MAM in Figures 11 or 12, the difference between their behavior and the associated explanation are again similar to the case when preemption is used. The higher degree of sharing in the cascaded bandwidth arrangement of RDM leads to a tighter coupling between the different classes of traffic when under overload. Their performance therefore
tends to degrade together when the load of any one class is increased. By imposing explicit maximum bandwidth usage on each class individually, better class isolation is achieved. The trade-off is that, generally, blocking performance in MAM is somewhat higher than in RDM, because of reduced sharing.
The difference in the behavior of RDM with or without preemption has already been discussed at the beginning of this section. For MAM, some notable differences can also be observed from a comparison of Figures 1 and 11. If preemption is used, higher-priority traffic tends to be able to maintain its performance despite the overloading of other classes. This is not so if preemption is not allowed. The trade-off is that, generally, the overloaded class sees a relatively higher blocking/preemption when preemption is enabled than there would be if preemption is disabled.
As observed towards the end of Section 3, the partitioning of bandwidth capacity for access by different traffic classes tends to reduce the maximum link efficiency achievable. We now consider the case where there is no such partitioning, thereby resulting in full sharing of the total bandwidth among all the classes. This is referred to as the Complete Sharing Model.
For MAM, this means that the BCs are such that up to 15 simultaneous LSPs are allowed for any class.
Similarly, for RDM, the BCs are
up to 15 simultaneous LSPs for class 1 by itself, up to 15 simultaneous LSPs for classes 1 and 2 together, and up to 15 simultaneous LSPs for all three classes together.
Effectively, there is now no distinction between MAM and RDM. Figure 13 shows the performance when all classes have equal access to link bandwidth under Complete Sharing.
With preemption being fully enabled, class 1 sees virtually no blocking, regardless of the loading conditions of the link. Since class 2 can only preempt class 3, class 2 sees some blocking and/or preemption when either class 1 load or its own load is above normal; otherwise, class 2 is unaffected by increases of class 3 load. As higher priority classes always preempt class 3 when the link is full, class 3 suffers the most, with high blocking/preemption when there is any load increase from any class. A comparison of Figures 1, 2, and 13 shows that, although the performance of both classes 1 and 2 is far superior under Complete Sharing, class 3 performance is much
better off under either MAM or RDM. In a sense, class 3 is starved under overload as no protection of its traffic is being provided under Complete Sharing.
Based on the previous results, a general theme is shown to be the trade-off between bandwidth sharing and class protection/isolation. To show this more concretely, let us compare the different BCMs in terms of the overall loss probability. This quantity is defined as the long-term proportion of LSP requests from all classes combined that are lost as a result of either blocking or preemption, for a given level of offered load.
As noted in the previous sections, although RDM has a higher degree of sharing than MAM, both ultimately converge to the Complete Sharing Model as the degree of sharing in each of them is increased. Figure 14 shows that, for a single link, the overall loss probability is the smallest under Complete Sharing and the largest under MAM, with that under RDM being intermediate. Expressed differently, Complete Sharing yields the highest link efficiency and MAM the lowest. As a matter of fact, the overall loss probability of Complete Sharing is identical to the loss probability of a single class as computed by the Erlang loss formula. Yet Complete Sharing has the poorest class protection capability. (Note that, in a network with many links and multiple-link routing paths, analysis in [6] showed that Complete Sharing does not necessarily lead to maximum network-wide bandwidth efficiency.)
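The claim that Complete Sharing behaves as a single Erlang system can be checked numerically: pooling the offered loads of Section 3.2 (2.7 + 3.5 + 3.5 = 9.7 erlangs) onto the 15-server link and applying the Erlang loss formula gives the overall loss probability directly. This is an illustrative sketch; the function name is ours, and the resulting value is our computation rather than a figure quoted in the RFC:

```python
def erlang_b(a, n):
    """Erlang B blocking probability via B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

# Total offered load pooled across the three classes on one 15-slot link:
total_load = 2.7 + 3.5 + 3.5             # 9.7 erlangs
overall_loss = erlang_b(total_load, 15)  # ~0.031 under Complete Sharing
```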
Increasing the degree of bandwidth sharing among the different traffic classes helps increase link efficiency. Such increase, however, will lead to a tighter coupling between different classes. Under normal loading conditions, proper dimensioning of the link so that there is adequate capacity for each class can minimize the effect of such coupling. Under overload conditions, when there is a scarcity of capacity, such coupling will be unavoidable and can cause severe degradation of service to the lower-priority classes. Thus, the objective of maximizing link usage as stated in criterion (5) of Section 1 must be exercised with care, with due consideration to the effect of interactions among the different classes. Otherwise, use of this criterion alone will lead to the selection of the Complete Sharing Model, as shown in Figure 14.
The intention of criterion (2) in judging the effectiveness of different BCMs is to evaluate how they help the network achieve the expected performance. This can be expressed in terms of the blocking and/or preemption behavior as seen by different classes under various loading conditions. For example, the relative strength of a BCM can
be demonstrated by examining how many times the per-class blocking or preemption probability under overload is worse than the corresponding probability under normal load.
BCMs are used in DS-TE for path computation and admission control of LSPs by enforcing different BCs for different classes of traffic so that Diffserv QoS performance can be maximized. Therefore, it is of interest to measure the performance of a BCM by the LSP blocking/preemption probabilities under various operational conditions. Based on this, the performance of RDM and MAM for LSP establishment has been analyzed and compared. In particular, three different scenarios have been examined: (1) all three classes have comparable performance objectives in terms of LSP blocking/preemption under normal conditions, (2) class 2 is given better performance at the expense of class 3, and (3) class 3 receives some minimum deterministic guarantee.
A general theme is the trade-off between bandwidth sharing, to achieve greater efficiency under normal conditions, and robust class protection/isolation under overload. The general properties of the two BCMs are as follows:
RDM
- allows greater sharing of bandwidth among different classes
- performs somewhat better under normal conditions
- works well when preemption is fully enabled; under partial preemption, not all preemption modes work equally well
MAM
- does not depend on the use of preemption
- is relatively insensitive to the different preemption modes when preemption is used
- provides more robust class isolation under overload
Generally, the use of preemption gives higher-priority traffic some degree of immunity to the overloading of other classes. This results in a higher blocking/preemption for the overloaded class than that in a pure blocking environment.
This document does not introduce additional security threats beyond those described for Diffserv [10] and MPLS Traffic Engineering [11, 12, 13, 14], and the same security measures and procedures described in those documents apply here. For example, the approach for defense against theft- and denial-of-service attacks discussed in [10], which consists of the combination of traffic conditioning at Diffserv boundary nodes along with security and integrity of the network infrastructure within a Diffserv domain, may be followed when DS-TE is in use.
Also, as stated in [11], it is specifically important that manipulation of administratively configurable parameters (such as those related to DS-TE LSPs) be executed in a secure manner by authorized entities. For example, as preemption is an administratively configurable parameter, it is critical that its values be set properly throughout the network. Any misconfiguration in any label switch may cause new LSP setup requests either to be blocked or to unnecessarily preempt LSPs already established. Similarly, the preemption values of LSP setup requests must be configured properly; otherwise, they may affect the operation of existing LSPs.
Inputs from Jerry Ash, Jim Boyle, Anna Charny, Sanjaya Choudhury, Dimitry Haskin, Francois Le Faucheur, Vishal Sharma, and Jing Shen are much appreciated.
[1] Le Faucheur, F. and W. Lai, "Requirements for Support of Differentiated Services-aware MPLS Traffic Engineering", RFC 3564, July 2003.
[2] Le Faucheur, F., Ed., "Protocol Extensions for Support of Diffserv-aware MPLS Traffic Engineering", RFC 4124, June 2005.
[3] Boyle, J., Gill, V., Hannan, A., Cooper, D., Awduche, D., Christian, B., and W. Lai, "Applicability Statement for Traffic Engineering with MPLS", RFC 3346, August 2002.
[4] Le Faucheur, F. and W. Lai, "Maximum Allocation Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering", RFC 4125, June 2005.
[5] Le Faucheur, F., Ed., "Russian Dolls Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering", RFC 4127, June 2005.
[6] Ash, J., "Max Allocation with Reservation Bandwidth Constraint Model for MPLS/DiffServ TE & Performance Comparisons", RFC 4126, June 2005.
[7] F. Le Faucheur, "Considerations on Bandwidth Constraints Models for DS-TE", Work in Progress.
[8] W.S. Lai, "Traffic Engineering for MPLS," Internet Performance and Control of Network Systems III Conference, SPIE Proceedings Vol. 4865, Boston, Massachusetts, USA, 30-31 July 2002, pp. 256-267.
[9] W.S. Lai, "Traffic Measurement for Dimensioning and Control of IP Networks," Internet Performance and Control of Network Systems II Conference, SPIE Proceedings Vol. 4523, Denver, Colorado, USA, 21-22 August 2001, pp. 359-367.
[10] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, "An Architecture for Differentiated Service", RFC 2475, December 1998.
[11] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.
[12] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.
[13] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering (TE) Extensions to OSPF Version 2", RFC 3630, September 2003.
[14] Smit, H. and T. Li, "Intermediate System to Intermediate System (IS-IS) Extensions for Traffic Engineering (TE)", RFC 3784, June 2004.
Author's Address
Wai Sum Lai
AT&T Labs
Room D5-3D18
200 Laurel Avenue
Middletown, NJ 07748
USA
Phone: +1 732-420-3712
EMail: wlai@att.com
Full Copyright Statement
Copyright (C) The Internet Society (2005).
This document is subject to the rights, licenses and restrictions contained in BCP 78 and at www.rfc-editor.org/copyright.html, and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.