Network Working Group                                J. de Oliveira, Ed.
Request for Comments: 4829                             Drexel University
Category: Informational                                 JP. Vasseur, Ed.
                                                     Cisco Systems, Inc.
                                                                 L. Chen
                                                    Verizon Laboratories
                                                              C. Scoglio
                                                 Kansas State University
                                                              April 2007
Label Switched Path (LSP) Preemption Policies for MPLS Traffic Engineering

Status of This Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The IETF Trust (2007).

IESG Note

This RFC is not a candidate for any level of Internet Standard. The IETF disclaims any knowledge of the fitness of this RFC for any purpose and, in particular, notes that the decision to publish is not based on IETF review for such things as security, congestion control, or inappropriate interaction with deployed protocols. The RFC Editor has chosen to publish this document at its discretion. Readers of this document should exercise caution in evaluating its value for implementation and deployment. See RFC 3932 for more information.

Abstract
When the establishment of a higher priority Traffic Engineering Label Switched Path (TE LSP) requires the preemption of a set of lower priority TE LSPs, a node has to make a local decision to select which TE LSPs will be preempted. The preempted LSPs are then rerouted by their respective Head-end Label Switch Router (LSR). This document presents a flexible policy that can be used to achieve different objectives: preempt the lowest priority LSPs; preempt the minimum number of LSPs; preempt the set of TE LSPs that provide the closest amount of bandwidth to the bandwidth required by the preempting TE LSP (to minimize bandwidth wastage); preempt the LSPs that have the best chance of being rerouted. Simulation results are given, and a comparison among several different policies, with respect to preemption cascading, number of preempted LSPs, priority, wasted bandwidth, and blocking probability, is also included.
Table of Contents

1. Motivation
2. Introduction
3. LSP Setup Procedure and Preemption
4. Preemption Cascading
5. Preemption Heuristic
   5.1. Preempting Resources on a Path
   5.2. Preemption Heuristic Algorithm
6. Examples
   6.1. Simple Case: Single Link
   6.2. Network Case
7. Security Considerations
8. Acknowledgements
9. Informative References
1. Motivation

The IETF Traffic Engineering Working Group has defined the requirements and protocol extensions for DiffServ-aware MPLS Traffic Engineering (DS-TE) [RFC3564] [RFC4124]. Several Bandwidth Constraint models for use with DS-TE have been proposed [RFC4127] [RFC4128] [RFC4126], and their performance was analyzed with respect to the use of preemption.
Preemption can be used as a tool to help ensure that high priority LSPs can always be routed through relatively favorable paths. Preemption can also be used to implement various prioritized access policies as well as restoration policies following fault events [RFC2702].
Although not a mandatory attribute in the traditional IP world, preemption becomes important in networks that use online, distributed Constrained Shortest Path First (CSPF) strategies for their Traffic Engineering Label Switched Path (TE LSP) path computation, where it limits the impact of bandwidth fragmentation. Moreover, preemption is an attractive strategy in an MPLS network in which traffic is treated in a differentiated manner and high-importance traffic may be given special treatment over lower-importance traffic [DEC-PREP] [ATM-PREP]. Nevertheless, in the DS-TE approach, whose issues and requirements are discussed in [RFC3564], the preemption policy is considered an important piece of the bandwidth reservation and management puzzle, but no preemption strategy is defined. Note that preemption also plays an important role in regular MPLS Traffic Engineering environments (with a single pool of bandwidth).
This document proposes a flexible preemption policy that can be adjusted in order to give different weights to various preemption criteria: the priority of the LSPs to be preempted, the number of LSPs to be preempted, the amount of bandwidth preempted, and the blocking probability. The implications (cascading effect, bandwidth wastage, priority of preempted LSPs) of selecting a certain order of importance for these criteria are discussed for the examples given.
2. Introduction

In [RFC2702], issues and requirements for Traffic Engineering in an MPLS network are highlighted. In order to address both traffic-oriented and resource-oriented performance objectives, the authors point out the need for priority and preemption parameters as Traffic Engineering attributes of traffic trunks. The notion of preemption and preemption priority is defined in [RFC3272], and preemption attributes are defined in [RFC2702] and [RFC3209].
A traffic trunk is defined as an aggregate of traffic flows belonging to the same class that are placed inside an LSP [RFC3564]. In this context, preemption is the act of selecting an LSP that will be removed from a given path in order to give room to another LSP with a higher priority (lower preemption number). More specifically, the preemption attributes determine whether an LSP with a certain setup preemption priority can preempt another LSP with a lower holding preemption priority from a given path, when there is competition for available resources. Note that competing for resources is one situation in which preemption can be triggered, but other situations may exist, themselves controlled by a policy.

For readability, a number of definitions from [RFC3564] are repeated here:

Class-Type (CT): The set of Traffic Trunks crossing a link that is governed by a specific set of Bandwidth constraints. CT is used for the purposes of link bandwidth allocation, constraint-based routing, and admission control. A given Traffic Trunk belongs to the same CT on all links.

TE-Class: A pair of:

i. A Class-Type.

ii. A preemption priority allowed for that Class-Type. This means that an LSP transporting a Traffic Trunk from that Class-Type can use that preemption priority as the set-up priority, as the holding priority, or both.

By definition, there may be more than one TE-Class using the same CT, as long as each TE-Class uses a different preemption priority. Also, there may be more than one TE-Class with the same preemption priority, provided that each TE-Class uses a different CT. The network administrator may define the TE-Classes in order to support preemption across CTs, to avoid preemption within a certain CT, or to avoid preemption completely, when so desired. To ensure coherent operation, the same TE-Classes must be configured in every Label Switched Router (LSR) in the DS-TE domain.
As a consequence of a per-TE-Class treatment, the Interior Gateway Protocol (IGP) needs to advertise separate Traffic Engineering information for each TE-Class, which consists of the Unreserved Bandwidth (UB) information [RFC4124]. The UB information will be used by the routers, checking against the bandwidth constraint model parameters, to decide whether preemption is needed. Details on how to calculate the UB are given in [RFC4124].
3. LSP Setup Procedure and Preemption

A new LSP setup request has two important parameters: bandwidth and preemption priority. The set of LSPs to be preempted can be selected by optimizing an objective function that represents these two parameters, and the number of LSPs to be preempted. More specifically, the objective function could be any, or a combination, of the following [DEC-PREP, ATM-PREP]:
* Preempt the LSPs that have the least priority (preemption priority). The Quality of Service (QoS) of high priority traffic would be better satisfied, and the cascading effect described below can be limited.

* Preempt the least number of LSPs. The number of LSPs that need to be rerouted would be lower.

* Preempt the least amount of bandwidth that still satisfies the request. Resource utilization could be improved. The preemption of larger TE LSPs (more than requested) by the newly signaled TE LSP implies a larger amount of bandwidth to be rerouted, which is likely to increase the probability of blocking (inability to find a path for some TE LSPs).

* Preempt LSPs that minimize the blocking probability (risk that preempted TE LSP cannot be rerouted).
After the preemption selection phase is finished, the selected LSPs are signaled as preempted and the new LSP is established (if a new path satisfying the constraints can be found). The UB information is then updated via flooding of an IGP-TE update and/or simply pruning the link where preemption occurred. Figure 1 shows a flowchart that summarizes how each LSP setup request is treated in a preemption-enabled scenario.
          LSP Setup Request
          (TE-Class i, bw=r)
                  |
                  |
                  v              NO
        UB[TE-Class i] >= r ? -------> Reject LSP Setup and flood an
                  |                    updated IGP-TE LSA/LSP
                  |YES
                  v              NO
        Preemption Needed ?   -------> Setup LSP/Update UB if a
                  |                    threshold is crossed
                  | YES
                  v
        Preemption            ------>  Setup LSP/Reroute Preempted LSPs
        Algorithm                      Update UB
Figure 1: Flowchart for LSP setup procedure.
In [DEC-PREP], the authors propose connection preemption policies that optimize the discussed criteria in a given order of importance: number of LSPs, bandwidth, and priority; bandwidth, priority, and number of LSPs. The novelty in our approach is the use of an objective function that can be adjusted by the service provider in order to stress the desired criteria. No particular criteria order is enforced. Moreover, a new criterion is added to the objective function: optimize the blocking probability (to minimize the risk that an LSP is not rerouted after being preempted).
4. Preemption Cascading

The decision of preempting an LSP may cause other preemptions in the network. This is called the preemption cascading effect, and different cascading levels may be reached by the preemption of a single LSP. The cascading levels are defined in the following manner: when an LSP is preempted and rerouted without causing any further preemption, the cascading is said to be of level zero. However, when a preempted LSP is rerouted, and in order to be established on the new route it also causes the preemption of other LSPs, the cascading is said to be of level 1, and so on.
Preemption cascading is not desirable, since it can typically result in severe network instabilities; therefore, policies that minimize it are of interest. In Section 5, a new versatile preemption heuristic is presented. In Section 6, preemption simulation results are discussed and the cascading effect is analyzed.
5. Preemption Heuristic

5.1. Preempting Resources on a Path

It is important to note that once a request for an LSP setup arrives, each LSR along the TE LSP path checks the available bandwidth on its outgoing link. For the links in which the available bandwidth is not enough, the preemption policy needs to be activated in order to guarantee the end-to-end bandwidth reservation for the new LSP. This is a distributed approach, in which every node on the path is responsible for running the preemption algorithm and determining which LSPs would be preempted in order to fit the new request. A distributed approach may not lead to an optimal solution.
Alternatively, in a centralized approach, a manager entity runs the preemption policy and determines the best LSPs to be preempted in order to free the required bandwidth in all the links that compose the path. The preemption policy would try to select LSPs that overlap with the path being considered (preempt a single LSP that overlaps with the route versus preempt a single LSP on every link that belongs to the route).
Both centralized and distributed approaches have advantages and drawbacks. A centralized approach would be more precise, but it requires that the whole network state be stored and updated accordingly, which raises scalability issues. In a network where LSPs are mostly static, an offline decision can be made to reroute LSPs, and the centralized approach could be appropriate. However, in a dynamic network in which LSPs are frequently set up and torn down because of new TE LSPs, bandwidth increases, reroutes due to failure, etc., the correctness of the stored network state could be questionable. Moreover, the setup time is generally longer than with a distributed solution. In this scenario, the distributed approach brings more benefits, even when it results in a non-optimal solution (the gain in optimality of a centralized approach over a distributed one depends on many factors: network topology, traffic matrix, TE strategy, etc.). A distributed approach is also easier to implement due to the distributed nature of the current Internet protocols.
Since the current Internet routing protocols are essentially distributed, a decentralized approach was selected for the LSP preemption policy. The parameters required by the new preemption policies are currently available for OSPF and Intermediate System to Intermediate System (IS-IS).
5.2. Preemption Heuristic Algorithm

Consider a request for a new LSP setup with bandwidth b and setup preemption priority p. When preemption is needed, due to lack of available resources, the preemptable LSPs will be chosen among the ones with lower holding preemption priority (higher numerical value) in order to fit r = b - Abw(l). The variable r represents the actual bandwidth that needs to be preempted (the requested bandwidth, b, minus the available bandwidth on link l: Abw(l)).
L is the set of active LSPs having a holding preemption priority lower (numerically higher) than p. So L is the set of candidates for preemption. b(l) is the bandwidth reserved by LSP l in L, expressed in bandwidth units, and p(l) is the holding preemption priority of LSP l.

In order to represent a cost for each preemption priority, an associated cost y(l) inversely related to the holding preemption priority p(l) is defined. For simplicity, a linear relation y(l)=8-p(l) is chosen. y is a cost vector with L components, y(l). b is a reserved bandwidth vector with dimension L, and components b(l).
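As a minimal illustration (a sketch, not text from this document), the two quantities defined above can be computed directly; the function names are ours:

```python
def required_bandwidth(b: float, abw: float) -> float:
    """r = b - Abw(l): requested bandwidth minus the bandwidth
    still available on link l."""
    return b - abw

def preemption_cost(p: int) -> int:
    """y(l) = 8 - p(l): LSPs with numerically high (i.e., low)
    holding priority are the cheapest to preempt."""
    return 8 - p
```

For instance, a request of b=175 on a fully reserved link (Abw(l)=0) gives r=175, and an LSP with holding priority 7 has cost y=1.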
Concerning the objective function, four main objectives can be reached in the selection of preempted LSPs:
* minimize the priority of preempted LSPs,
* minimize the number of preempted LSPs,
* minimize the preempted bandwidth,
* minimize the blocking probability.
To have the widest choice on the overall objective that each service provider needs to achieve, the following equation was defined (for simplicity chosen as a weighted sum of the above mentioned criteria):
H(l) = alpha y(l) + beta 1/b(l) + gamma (b(l)-r)^2 + theta b(l)
In this equation:

- alpha y(l) captures the cost of preempting high priority LSPs.

- beta 1/b(l) penalizes the preemption of low bandwidth LSPs, capturing the cost of preempting a large number of LSPs.

- gamma (b(l)-r)^2 captures the cost of preemption of LSPs that are much larger or much smaller than r.

- theta b(l) captures the cost of preempting large LSPs.

Coefficients alpha, beta, gamma, and theta can be chosen to emphasize one or more components of H.
The coefficient theta is defined such that theta = 0 if gamma > 0. This is because, when trying to minimize the blocking probability of preempted LSPs, the heuristic gives preference to preempting several small LSPs (therefore gamma, the weight that minimizes the preempted bandwidth by favoring LSPs whose bandwidth is close to the requested amount, needs to be set to zero). The selection of several small LSPs in a normally loaded portion of the network will increase the chance that such LSPs are successfully rerouted. Moreover, the selection of several small LSPs need not imply preempting much more than the required bandwidth (resulting in low bandwidth wastage), as will be seen in the discussed examples. When preemption is to happen in a heavily loaded portion of the network, to minimize blocking probability, the heuristic will select fewer LSPs for preemption in order to increase the chance of rerouting.
H is calculated for each LSP in L. The LSPs to be preempted are chosen as the ones with smaller H that add enough bandwidth to accommodate r. When sorting LSPs by H, LSPs with the same value for H are ordered by bandwidth b, in increasing order. For each LSP with repeated H, the algorithm checks whether the bandwidth b assigned to only that LSP is enough to satisfy r. If there is no such LSP, it checks whether the bandwidth of each of those LSPs added to the previously preempted LSPs' bandwidth is enough to satisfy r. If that is not true for any LSP in that repeated H-value sequence, the algorithm preempts the LSP that has the larger amount of bandwidth in the sequence, and keeps preempting in decreasing order of b until r is satisfied or the sequence is finished. If the sequence is finished and r is not satisfied, the algorithm again selects LSPs to be preempted based on an increasing order of H. More details on the algorithm are given in [PREEMPTION].
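The selection rules above can be summarized in a short Python sketch (our interpretation of the heuristic, not the reference implementation from [PREEMPTION]); the sample data reproduces Table 1 from the single-link example in Section 6.1:

```python
from itertools import groupby

def preempt(lsps, r, alpha=0.0, beta=0.0, gamma=0.0, theta=0.0):
    """Select LSPs to preempt until at least r bandwidth units are freed.

    lsps maps an LSP name to (b, p): reserved bandwidth and holding
    preemption priority.  Cost per candidate:
    H(l) = alpha*y(l) + beta/b(l) + gamma*(b(l)-r)^2 + theta*b(l),
    with y(l) = 8 - p(l).  Returns (selected names, freed bandwidth).
    """
    def H(item):
        b, p = item[1]
        return alpha * (8 - p) + beta / b + gamma * (b - r) ** 2 + theta * b

    # Sort by H; equal-H LSPs are ordered by increasing bandwidth.
    ordered = sorted(lsps.items(), key=lambda it: (H(it), it[1][0]))
    selected, freed = [], 0
    for _, grp in groupby(ordered, key=lambda it: round(H(it), 9)):
        if freed >= r:
            break
        grp = list(grp)
        # Within a run of equal H values, prefer a single LSP whose
        # bandwidth (added to what is already freed) satisfies r ...
        hit = next((it for it in grp if freed + it[1][0] >= r), None)
        if hit is not None:
            selected.append(hit[0])
            freed += hit[1][0]
        else:
            # ... otherwise preempt in decreasing order of bandwidth
            # until r is met or the run is exhausted.
            for name, (b, _) in sorted(grp, key=lambda it: -it[1][0]):
                selected.append(name)
                freed += b
                if freed >= r:
                    break
    return selected, freed

# Table 1 data: name -> (bandwidth in Mbps, holding priority).
table1 = {
    "L1": (20, 1),  "L2": (10, 2),  "L3": (60, 3),  "L4": (25, 4),
    "L5": (20, 5),  "L6": (1, 6),   "L7": (75, 7),  "L8": (45, 5),
    "L9": (100, 3), "L10": (5, 6),  "L11": (40, 4), "L12": (85, 5),
    "L13": (50, 2), "L14": (20, 3), "L15": (70, 4), "L16": (25, 7),
}
```

With alpha=1 and the other coefficients at zero, this sketch reproduces the first selection discussed in Section 6.1 (L6, L7, L10, L12, and L16, freeing 191 bandwidth units); with beta=1 it selects L9 and L12 (185 units).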
When the objective is to minimize blocking, the heuristic will follow two options on how to calculate H:

* If the link in which preemption is to happen is normally loaded, several small LSPs will be selected for preemption using H(l)= alpha y(l) + theta b(l).

* If the link is overloaded, few LSPs are selected using H(l)= alpha y(l) + beta 1/b(l).
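This load-dependent choice can be sketched as follows (the load threshold and coefficient values are illustrative assumptions; the document leaves them to the operator):

```python
def blocking_min_cost(b, p, link_load, heavy=0.9,
                      alpha=1.0, theta=0.01, beta=10.0):
    """Cost H(l) when minimizing blocking probability.

    Normally loaded link: H = alpha*y + theta*b, which favors
    preempting several small LSPs.  Overloaded link:
    H = alpha*y + beta/b, which favors preempting few large LSPs.
    The threshold 'heavy' (fraction of link capacity) is assumed.
    """
    y = 8 - p
    if link_load < heavy:
        return alpha * y + theta * b
    return alpha * y + beta / b
```

At equal priority, a 5 Mbps LSP is cheaper to preempt than a 75 Mbps one on a normally loaded link, and the ordering reverses on an overloaded link.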
6. Examples

6.1. Simple Case: Single Link

We first consider a very simple case, in which the path considered for preemption is composed of a single hop. The objective of this example is to illustrate how the heuristic works. In the next section, we will study a more complex case in which the preemption policies are tested on a network.
Consider a link with 16 LSPs with reserved bandwidth b in Mbps, preemption holding priority p, and cost y, as shown in Table 1. In this example, 8 TE-Classes are active. The preemption here is being performed on a single link as an illustrative example.
------------------------------------------------------------------
LSP            L1   L2   L3   L4   L5   L6   L7   L8
------------------------------------------------------------------
Bandwidth (b)  20   10   60   25   20    1   75   45
Priority (p)    1    2    3    4    5    6    7    5
Cost (y)        7    6    5    4    3    2    1    3
------------------------------------------------------------------
LSP            L9  L10  L11  L12  L13  L14  L15  L16
------------------------------------------------------------------
Bandwidth (b) 100    5   40   85   50   20   70   25
Priority (p)    3    6    4    5    2    3    4    7
Cost (y)        5    2    4    3    6    5    4    1
------------------------------------------------------------------

Table 1: LSPs in the considered link.
A request for an LSP establishment arrives with r=175 Mbps and p=0 (highest possible priority, which implies that all LSPs with p>0 in Table 1 will be considered when running the algorithm). Assume Abw(l)=0.

If priority is the only important criterion, the network operator configures alpha=1, beta=gamma=theta=0. In this case, LSPs L6, L7, L10, L12, and L16 are selected for preemption, freeing 191 bandwidth units to establish the high-priority LSP. Note that 5 LSPs were preempted, but all with a priority level between 5 and 7.

In a network in which rerouting is an expensive task to perform (and the number of rerouted TE LSPs should be as small as possible), one might prefer to set beta=1 and alpha=gamma=theta=0. LSPs L9 and L12 would then be selected for preemption, adding up to 185 bandwidth units (less wastage than the previous case). The priorities of the selected LSPs are 3 and 5 (which means that they might themselves preempt some other LSPs when rerouted).

Suppose the network operator decides that it is more appropriate to configure alpha=1, beta=10, gamma=0, theta=0 (the parameters were set to values that would balance the weight of each component, namely priority and number, in the cost function), because in this network rerouting is very expensive, LSP priority is important, but bandwidth is not a critical issue. In this case, LSPs L7, L12, and L16 are selected for preemption. This configuration results in a smaller number of preempted LSPs when compared to the first case, and the priority levels are kept between 5 and 7.

To take into account the number of LSPs preempted, the preemption priority, and the amount of bandwidth preempted, the network operator may set alpha > 0, beta > 0, and gamma > 0. To achieve a balance among the three components, the parameters need to be normalized. Aiming for a balance, the parameters could be set as alpha=1, beta=10 (bringing the term 1/b(l) closer to the other parameters), and gamma=0.001 (bringing the value of the term (b(l)-r)^2 closer to the other parameters). LSPs L7 and L9 are selected for preemption, resulting in exactly 175 bandwidth units and with priorities 3 and 7 (note that less LSP are preempted but they have a higher priority which may result in a cascading effect).

If the minimization of the blocking probability is the criterion of most interest, the cost function could be configured with theta=1, alpha=beta=gamma=0. In that case, several small LSPs are selected for preemption: LSPs L2, L4, L5, L6, L7, L10, L14, and L16. Their preemption will free 181 Mbps in this link, and because the selected LSPs have small bandwidth requirement there is a good chance that each of them will find a new route in the network.
From the above example, it can be observed that when priority was the highest concern and the number of preempted LSPs was not an issue, the 5 LSPs with the lowest priority were selected for preemption. When only the number of LSPs was an issue, the minimum number of LSPs, 2, was selected for preemption, but their priority was higher than in the previous case. When priority and number were important factors and a possible waste of bandwidth was not an issue, 3 LSPs were selected, adding more bandwidth than requested but still with low preemption priority. When considering all the parameters except the blocking probability, the smallest set of LSPs was selected, 2, adding just enough bandwidth, 175 Mbps, and with priority levels 3 and 7.
When the blocking probability was the criterion of interest, several (8) small LSPs were preempted. The bandwidth wastage is low, but the number of rerouting events will increase. Given the bandwidth requirement of the preempted LSPs, it is expected that the chances of finding a new route for each LSP will be high.
6.2. Network Case

For these experiments, we consider a 150-node topology with an average network connectivity of 3. 10% of the nodes in the topology have a degree of connectivity of 6. 10% of the links are OC3, 70% are OC48, and 20% are OC192.
Two classes of TE LSPs are in use: Voice LSPs and Data Internet/VPN LSPs. For each class of TE LSP, the set of preemptions (and the proportion of LSPs for each preemption) and the size distributions are as follows (a total of T LSPs is considered):

T: total number of TE LSPs in the network (T = 18,306 LSPs)

Voice:
   Number: 20% of T
   Preemption: 0, 1, and 2
   Size: uniform distribution between 30M and 50M
Internet/VPN TE:
Number: 4% of T Preemption: 3 Size: uniform distribution between 20M and 50M
数量:4%T抢占:3尺寸:在20M和50M之间均匀分布
Number: 8% of T Preemption 4 Size: uniform distribution between 15M and 40M
Number: 8% of T
Preemption: 5
Size: uniform distribution between 10M and 20M
Number: 20% of T
Preemption: 6
Size: uniform distribution between 1M and 20M
Number: 40% of T
Preemption: 7
Size: uniform distribution between 1K and 1M
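The traffic mix above can be sketched as a small population sampler. The class table and field names below are illustrative assumptions; only the proportions, preemption priorities, and size ranges are taken from the text.

```python
import random

T = 18306  # total number of TE LSPs in the network

# (share of T, preemption priorities, (min size, max size) in Mbps)
# Values follow the Voice and Internet/VPN distributions listed above.
CLASSES = [
    (0.20, [0, 1, 2], (30.0, 50.0)),   # Voice
    (0.04, [3],       (20.0, 50.0)),   # Internet/VPN, preemption 3
    (0.08, [4],       (15.0, 40.0)),   # Internet/VPN, preemption 4
    (0.08, [5],       (10.0, 20.0)),   # Internet/VPN, preemption 5
    (0.20, [6],       (1.0, 20.0)),    # Internet/VPN, preemption 6
    (0.40, [7],       (0.001, 1.0)),   # Internet/VPN, preemption 7 (1K-1M)
]

def build_population(seed=0):
    """Draw one simulated LSP population from the distributions above."""
    rng = random.Random(seed)
    lsps = []
    for share, priorities, (lo, hi) in CLASSES:
        for _ in range(int(share * T)):
            lsps.append({
                "priority": rng.choice(priorities),
                "bandwidth": rng.uniform(lo, hi),  # Mbps
            })
    return lsps

lsps = build_population()
```

Because each class count is truncated to an integer, the sampled total can fall a couple of LSPs short of T; a real simulator would reconcile the remainder.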
LSP setup events are triggered mainly by network failures: a link or node fails, and the affected LSPs are rerouted.
The network failure events were simulated with two models:
- Constant: 1 failure, chosen randomly among the set of links, every hour.
- Poisson: failures arriving as a Poisson process with a mean interarrival time of 1 hour.
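The two failure-arrival models above could be generated as follows; the function names, the 24-hour horizon, and the link count (150 nodes with average degree 3 gives roughly 225 links) are our assumptions.

```python
import random

def constant_failures(duration_h, links, rng):
    """Constant model: one randomly chosen link fails every hour."""
    return [(float(t), rng.choice(links)) for t in range(1, duration_h + 1)]

def poisson_failures(duration_h, links, rng):
    """Poisson model: failures with mean interarrival time of 1 hour."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0)  # rate = 1 per hour
        if t > duration_h:
            return events
        events.append((t, rng.choice(links)))

rng = random.Random(1)
links = ["L%d" % i for i in range(225)]  # ~150 nodes * degree 3 / 2
const_events = constant_failures(24, links, rng)
poisson_events = poisson_failures(24, links, rng)
```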
Table 2 shows the results for simulations with constant failure. The simulations were run with the preemption heuristic configured to balance different criteria (left side of the table), and then with different preemption policies that consider the criteria in a given order of importance rather than balancing them (right side of the table).
The proposed heuristic was configured to balance the following criteria:
HPB: The heuristic with priority and bandwidth wastage as the most important criteria (alpha=10, beta=0, gamma=0.001, theta=0).
HBlock: The heuristic considering the minimization of blocking probability (normal load links: alpha=1, beta=0, gamma=0, theta=0.01) (heavy load links: alpha=1, beta=10).
HNB: The heuristic with number of preemptions and wasted bandwidth in consideration (alpha=0, beta=10, gamma=0.001, theta=0).
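As one illustration of how the weights alpha (priority), beta (number of preemptions), gamma (wasted bandwidth), and theta (blocking probability) might trade off the four criteria, the sketch below scores each candidate LSP with a weighted sum. The exact objective function of the heuristic is defined elsewhere in this document; the particular form of each term here is an assumption for illustration only.

```python
def preemption_cost(lsp_priority, lsp_bw, requested_bw,
                    alpha, beta, gamma, theta):
    """Illustrative per-LSP preemption cost; lower = better candidate.

    All four terms are assumptions, chosen only to mirror the intent
    of the weights described above.
    """
    # alpha: favor low-priority LSPs (7 is the lowest priority).
    priority_term = alpha * (7 - lsp_priority)
    # beta: favor large LSPs, so fewer preemptions are needed.
    count_term = beta * (1.0 / lsp_bw)
    # gamma: penalize bandwidth far in excess of what is requested.
    waste_term = gamma * max(lsp_bw - requested_bw, 0.0)
    # theta: favor small LSPs, which are easier to reroute.
    blocking_term = theta * lsp_bw
    return priority_term + count_term + waste_term + blocking_term
```

With HPB-like weights (alpha=10, gamma=0.001, beta=theta=0), a priority-7 LSP scores lower (better) than a priority-0 LSP of the same size, matching the intuition behind the configurations above.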
Other algorithms that consider the criteria in a given order of importance:
P: Sorts candidate LSPs by priority only.
PN: Sorts the LSPs by priority and, for cases in which the priority is the same, orders those LSPs by decreasing bandwidth (selecting larger LSPs for preemption in order to minimize the number of preempted LSPs).
PB: Sorts the LSPs by priority and, for LSPs with the same priority, sorts those by increasing bandwidth (selecting smaller LSPs in order to reduce bandwidth wastage).
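Assuming each candidate LSP is represented as a (priority, bandwidth) pair, with priority 7 the lowest and therefore preempted first, the three orderings can be sketched as simple sort keys (function names are ours):

```python
def order_p(lsps):
    """P: lowest-priority (numerically highest) LSPs first."""
    return sorted(lsps, key=lambda l: -l[0])

def order_pn(lsps):
    """PN: by priority, ties broken by decreasing bandwidth
    (preempt larger LSPs to minimize their number)."""
    return sorted(lsps, key=lambda l: (-l[0], -l[1]))

def order_pb(lsps):
    """PB: by priority, ties broken by increasing bandwidth
    (preempt smaller LSPs to reduce bandwidth wastage)."""
    return sorted(lsps, key=lambda l: (-l[0], l[1]))

# Example: two priority-7 LSPs of different sizes, plus higher-priority ones.
candidates = [(7, 5.0), (7, 40.0), (3, 20.0), (6, 10.0)]
```

On this example, PN puts the large priority-7 LSP first, PB puts the small priority-7 LSP first, and P leaves the priority-3 LSP last under all orderings.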
-----------------------------------------------------------------
                |      Heuristic       |    Other algorithms     |
                |----------------------|-------------------------|
                |  HPB  | HBlock|  HNB |   P   |   PN  |   PB    |
-----------------------------------------------------------------
Need to be      |  532  |  532  |  532 |  532  |  532  |  532    |
Rerouted        |       |       |      |       |       |         |
-----------------------------------------------------------------
Preempted       |  612  |  483  |  619 |  504  |  477  |  598    |
-----------------------------------------------------------------
Rerouted        |467|76%|341|73%|475|77%|347|69%|335|70%|436|73% |
Blocked         |145|24%|130|27%|144|23%|157|31%|142|30%|162|27% |
-----------------------------------------------------------------
Max Cascading   |  4.5  |   2   |   5  | 2.75  |   2   | 2.75    |
-----------------------------------------------------------------
Wasted Bandwidth|       |       |      |       |       |         |
 AVR (Mbps)     |  6638 |  532  | 6479 | 8247  | 8955  | 6832    |
 Worst Case(Mbps)| 35321| 26010 |36809 | 28501 | 31406 | 23449   |
-----------------------------------------------------------------
Priority        |       |       |      |       |       |         |
 Average        |   6   |  6.5  |  5.8 |  6.6  |  6.6  |  6.6    |
 Worst Case     |  1.5  |  3.8  |  1.2 |  3.8  |  3.8  |  3.8    |
-----------------------------------------------------------------
Extra Hops      |       |       |      |       |       |         |
 Average        |  0.23 |  0.25 | 0.22 |  0.25 |  0.25 |  0.23   |
 Worst Case     |  3.25 |   3   | 3.25 |   3   |   3   |  2.75   |
-----------------------------------------------------------------

   Table 2: Simulation results for constant network failure:
            1 random failure every hour.
From Table 2, we can conclude that, among the heuristic results (HPB, HBlock, HNB), HBlock resulted in the smallest number of preempted LSPs. More importantly, it also resulted in a smaller overall rejection rate and a smaller average wasted bandwidth (and the second-smallest worst-case wasted bandwidth overall).
Although HBlock does not try to minimize the number of preempted LSPs, it ends up doing so, because it mostly preempts low-priority LSPs and therefore does not propagate cascading much further. Cascading was the lowest overall (preemption caused at most two levels of preemption, which was also the case for policy PN). The average and worst-case preemption priorities were very satisfactory (mostly lowest-priority LSPs were preempted, as with the other algorithms P, PN, and PB).
When HPB was in use, more LSPs were preempted as a consequence of the higher cascading effect. That is due to the heuristic's choice of preempting LSPs that are very close in bandwidth to the preempting LSP (which can result in preempting a higher-priority LSP and therefore causing cascading). The wasted bandwidth was reduced when compared to the other algorithms (P, PN, PB).
When HNB was used, cascading was higher than in the other cases, because LSPs with higher priority could be preempted. When compared to P, PN, or PB, the heuristic HNB preempted more LSPs (in fact, it preempted the largest number of LSPs overall, clearly showing the cascading effect), but the average wasted bandwidth was smaller, although not as small as HBlock's. (The HNB heuristic tries to preempt a single LSP, meaning it will preempt LSPs whose reserved bandwidth is similar to the bandwidth actually needed. The algorithm is not always successful, because such a match may not exist, and in that case the wasted bandwidth can be high.) The preempted priority was the highest on average and in the worst case, which also explains why the cascading level was the highest (the heuristic selects LSPs for preemption without looking at their priority levels). In summary, this policy resulted in poor performance.
Policy PN resulted in a small number of preempted LSPs overall and a small number of LSPs not successfully rerouted. Cascading is low, but bandwidth wastage is very high (the highest overall). Moreover, in several cases in which rerouting happened on underloaded portions of the network, the heuristic HBlock preempted a smaller number of LSPs than PN.
Policy P selects a larger number of low-priority LSPs for preemption (when compared to PN), and therefore it successfully reroutes fewer LSPs than HBlock, HPB, HNB, or PN. The bandwidth wastage is also higher than with any of the heuristic configurations or with PB, and it could be worse if the network had LSPs with low priority and large bandwidth, which is not the case.
Policy PB, when compared to PN, resulted in a larger number of preempted LSPs and an overall larger number of blocked LSPs (not rerouted) due to preemption. Cascading was slightly higher. Since the selected LSPs have low priority, they are not able to preempt much further and are blocked when the links are congested. Bandwidth wastage was smaller, since the policy tries to minimize wastage, and the preempted priority was about the same in the average and worst cases.
The simulation results show that when preemption is based on priority, cascading is not critical, since the preempted LSPs will not be able to propagate preemption much further. When the number of LSPs is considered, fewer LSPs are preempted and the chance of rerouting increases. When bandwidth wastage is considered, smaller LSPs are preempted on each link and the wasted bandwidth is low. The heuristic seems to combine these features, yielding the best results, especially for blocking probability. When the heuristic was configured to minimize blocking probability (HBlock), small LSPs with low priority were selected for preemption on normally loaded links, and fewer (larger) LSPs with low priority were selected on congested links. Due to their low priority, cascading was not an issue. Several LSPs were selected for preemption, but the rate of LSPs that were not successfully rerouted was the lowest. Since the LSPs are small, it is easier to find a new route in the network. When selecting LSPs on a congested link, fewer, larger LSPs are selected, improving load balance. Moreover, the bandwidth wastage was the lowest overall. In summary, the heuristic is very flexible and can be configured according to the network provider's best interest regarding the considered criteria.
In several cases, the failure of a link resulted in no preemption at all (all LSPs were able to find an alternate path in the network), or resulted in the preemption of very few LSPs followed by their successful rerouting with no cascading effect.
It is also important to note that for all policies in use, the number of extra hops when LSPs are rerouted was not critical, showing that preempted LSPs can be rerouted on a path with the same length or a path that is slightly longer in number of hops.
Security Considerations

The practice described in this document does not raise specific security issues beyond those of existing TE.
Acknowledgments

We would like to acknowledge the input and helpful comments from Francois Le Faucheur (Cisco Systems) and George Uhl (Swales Aerospace).
Authors' Addresses
Jaudelice C. de Oliveira (editor) Drexel University 3141 Chestnut Street (ECE Dept.) Philadelphia, PA 19104 USA
EMail: jau@ece.drexel.edu
JP Vasseur (editor) Cisco Systems, Inc. 1414 Massachusetts Avenue Boxborough, MA 01719 USA
EMail: jpv@cisco.com
Leonardo Chen Verizon Laboratories 40 Sylvan Rd. LA0MS55 Waltham, MA 02451 USA
EMail: leonardo.c.chen@verizon.com
Caterina Scoglio Kansas State University 2061 Rathbone Hall Manhattan, Kansas 66506-5204 USA
EMail: caterina@eece.ksu.edu
Full Copyright Statement
Copyright (C) The IETF Trust (2007).
This document is subject to the rights, licenses and restrictions contained in BCP 78 and at www.rfc-editor.org/copyright.html, and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.