Internet Engineering Task Force (IETF)                    B. Constantine
Request for Comments: 7640                                          JDSU
Category: Informational                                      R. Krishnan
ISSN: 2070-1721                                                Dell Inc.
                                                          September 2015
Traffic Management Benchmarking
Abstract
This framework describes a practical methodology for benchmarking the traffic management capabilities of networking devices (i.e., policing, shaping, etc.). The goals are to provide a repeatable test method that objectively compares performance of the device's traffic management capabilities and to specify the means to benchmark traffic management with representative application traffic.
Status of This Memo
This document is not an Internet Standards Track specification; it is published for informational purposes.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7640.
Copyright Notice
Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction ....................................................3
   1.1. Traffic Management Overview ................................3
   1.2. Lab Configuration and Testing Overview .....................5
2. Conventions Used in This Document ...............................6
3. Scope and Goals .................................................7
4. Traffic Benchmarking Metrics ...................................10
   4.1. Metrics for Stateless Traffic Tests .......................10
   4.2. Metrics for Stateful Traffic Tests ........................12
5. Tester Capabilities ............................................13
   5.1. Stateless Test Traffic Generation .........................13
        5.1.1. Burst Hunt with Stateless Traffic ..................14
   5.2. Stateful Test Pattern Generation ..........................14
        5.2.1. TCP Test Pattern Definitions .......................15
6. Traffic Benchmarking Methodology ...............................17
   6.1. Policing Tests ............................................17
        6.1.1. Policer Individual Tests ...........................18
        6.1.2. Policer Capacity Tests .............................19
               6.1.2.1. Maximum Policers on Single Physical Port ..20
               6.1.2.2. Single Policer on All Physical Ports ......22
               6.1.2.3. Maximum Policers on All Physical Ports ....22
   6.2. Queue/Scheduler Tests .....................................23
        6.2.1. Queue/Scheduler Individual Tests ...................23
               6.2.1.1. Testing Queue/Scheduler with
                        Stateless Traffic .........................23
               6.2.1.2. Testing Queue/Scheduler with
                        Stateful Traffic ..........................25
        6.2.2. Queue/Scheduler Capacity Tests .....................28
               6.2.2.1. Multiple Queues, Single Port Active .......28
                        6.2.2.1.1. Strict Priority on
                                   Egress Port ....................28
                        6.2.2.1.2. Strict Priority + WFQ on
                                   Egress Port ....................29
               6.2.2.2. Single Queue per Port, All Ports Active ...30
               6.2.2.3. Multiple Queues per Port,
                        All Ports Active ..........................31
   6.3. Shaper Tests ..............................................32
        6.3.1. Shaper Individual Tests ............................32
               6.3.1.1. Testing Shaper with Stateless Traffic .....33
               6.3.1.2. Testing Shaper with Stateful Traffic ......34
        6.3.2. Shaper Capacity Tests ..............................36
               6.3.2.1. Single Queue Shaped, All Physical
                        Ports Active ..............................37
               6.3.2.2. All Queues Shaped, Single Port Active .....37
               6.3.2.3. All Queues Shaped, All Ports Active .......39
   6.4. Concurrent Capacity Load Tests ............................40
7. Security Considerations ........................................40
8. References .....................................................41
   8.1. Normative References ......................................41
   8.2. Informative References ....................................42
Appendix A. Open Source Tools for Traffic Management Testing ......44
Appendix B. Stateful TCP Test Patterns ............................45
Acknowledgments ...................................................51
Authors' Addresses ................................................51
1. Introduction

Traffic management (i.e., policing, shaping, etc.) is an increasingly important component when implementing network Quality of Service (QoS).
There is currently no framework to benchmark these features, although some standards address specific areas as described in Section 1.1.
This document provides a framework to conduct repeatable traffic management benchmarks for devices and systems in a lab environment.
Specifically, this framework defines the methods to characterize the capacity of the following traffic management features in network devices: classification, policing, queuing/scheduling, and traffic shaping.
This benchmarking framework can also be used as a test procedure to assist in the tuning of traffic management parameters before service activation. In addition to Layer 2/3 (Ethernet/IP) benchmarking, Layer 4 (TCP) test patterns are proposed by this document in order to more realistically benchmark end-user traffic.
1.1. Traffic Management Overview

In general, a device with traffic management capabilities performs the following functions:
- Traffic classification: identifies traffic according to various configuration rules (for example, IEEE 802.1Q Virtual LAN (VLAN), Differentiated Services Code Point (DSCP)) and marks this traffic internally to the network device. Multiple external priorities (DSCP, 802.1p, etc.) can map to the same priority in the device.
- Traffic policing: limits the rate of traffic that enters a network device according to the traffic classification. If the traffic exceeds the provisioned limits, the traffic is either dropped or remarked and forwarded onto the next network device.
- Traffic scheduling: provides traffic classification within the network device by directing packets to various types of queues and applies a dispatching algorithm to assign the forwarding sequence of packets.
- Traffic shaping: controls traffic by actively buffering and smoothing the output rate in an attempt to adapt bursty traffic to the configured limits.
- Active Queue Management (AQM): involves monitoring the status of internal queues and proactively dropping (or remarking) packets, which causes hosts using congestion-aware protocols to "back off" and in turn alleviate queue congestion [RFC7567]. On the other hand, classic traffic management techniques reactively drop (or remark) packets based on queue-full conditions. The benchmarking scenarios for AQM are different and are outside the scope of this testing framework.
Even though AQM is outside the scope of this framework, it should be noted that the TCP metrics and TCP test patterns (defined in Sections 4.2 and 5.2, respectively) could be useful to test new AQM algorithms (targeted to alleviate "bufferbloat"). Examples of these algorithms include Controlled Delay [CoDel] and Proportional Integral controller Enhanced [PIE].
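As an illustration of the classification function listed above, the mapping from external priorities to a device-internal priority can be sketched as a simple lookup. The class names and the particular DSCP-to-class assignments below are hypothetical, chosen only to show that multiple external markings may map to the same internal priority:

```python
# Hypothetical classification table: DSCP value -> internal priority class.
# Values and class names are illustrative, not taken from this document.
DSCP_TO_INTERNAL = {
    46: "voice",        # EF
    34: "video",        # AF41
    26: "video",        # AF31 maps to the same internal class
    0:  "best-effort",
}

def classify(dscp: int) -> str:
    """Return the internal priority class for an ingress packet's DSCP."""
    return DSCP_TO_INTERNAL.get(dscp, "best-effort")
```

A comparable table would apply to 802.1p code points or VLAN IDs; the point is only that classification collapses many external markings into the device's internal set of priorities.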
The following diagram is a generic model of the traffic management capabilities within a network device. It is not intended to represent all variations of manufacturer traffic management capabilities, but it provides context for this test framework.
|----------|   |----------------|   |--------------|   |----------|
|          |   |                |   |              |   |          |
|Interface |   |Ingress Actions |   |Egress Actions|   |Interface |
|Ingress   |   |(classification,|   |(scheduling,  |   |Egress    |
|Queues    |   | marking,       |   | shaping,     |   |Queues    |
|          |-->| policing, or   |-->| active queue |-->|          |
|          |   | shaping)       |   | management,  |   |          |
|          |   |                |   | remarking)   |   |          |
|----------|   |----------------|   |--------------|   |----------|
Figure 1: Generic Traffic Management Capabilities of a Network Device
Ingress actions such as classification are defined in [RFC4689] and include IP addresses, port numbers, and DSCP. In terms of marking, [RFC2697] and [RFC2698] define a Single Rate Three Color Marker and a Two Rate Three Color Marker, respectively.
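The Single Rate Three Color Marker of [RFC2697] can be sketched as two token buckets of size CBS and EBS, refilled at the CIR, with packets marked green, yellow, or red. The following is a rough color-blind-mode illustration under assumed parameter values, not a normative implementation:

```python
class SrTCM:
    """Color-blind Single Rate Three Color Marker (after the RFC 2697 model)."""

    def __init__(self, cir_bps: float, cbs: int, ebs: int):
        self.cir = cir_bps / 8.0        # token rate, bytes per second
        self.cbs, self.ebs = cbs, ebs   # committed / excess bucket sizes (bytes)
        self.tc, self.te = cbs, ebs     # buckets start full
        self.last = 0.0                 # time of last update, seconds

    def mark(self, size: int, now: float) -> str:
        # Refill: new tokens fill Tc up to CBS; the overflow fills Te up to EBS.
        new = (now - self.last) * self.cir
        self.last = now
        spill = max(0.0, new - (self.cbs - self.tc))
        self.tc = min(self.cbs, self.tc + new)
        self.te = min(self.ebs, self.te + spill)
        if size <= self.tc:             # within the committed burst
            self.tc -= size
            return "green"
        if size <= self.te:             # within the excess burst
            self.te -= size
            return "yellow"
        return "red"                    # out of profile: drop or remark
```

A Two Rate Three Color Marker [RFC2698] differs in that the yellow bucket is refilled at its own rate (the EIR) rather than by overflow from the committed bucket.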
The Metro Ethernet Forum (MEF) specifies policing and shaping in terms of ingress and egress subscriber/provider conditioning functions as described in MEF 12.2 [MEF-12.2], as well as ingress and bandwidth profile attributes as described in MEF 10.3 [MEF-10.3] and MEF 26.1 [MEF-26.1].
1.2. Lab Configuration and Testing Overview

The following diagram shows the lab setup for the traffic management tests:
+--------------+     +-------+     +----------+    +-----------+
| Transmitting |     |       |     |          |    | Receiving |
| Test Host    |     |       |     |          |    | Test Host |
|              |-----| Device|---->| Network  |--->|           |
|              |     | Under |     | Delay    |    |           |
|              |     | Test  |     | Emulator |    |           |
|              |<----|       |<----|          |<---|           |
|              |     |       |     |          |    |           |
+--------------+     +-------+     +----------+    +-----------+
Figure 2: Lab Setup for Traffic Management Tests
As shown in the test diagram, the framework supports unidirectional and bidirectional traffic management tests (where the transmitting and receiving roles would be reversed on the return path).
This testing framework describes the tests and metrics for each of the following traffic management functions:
- Classification
- Policing
- Queuing/scheduling
- Shaping
The tests are divided into individual and rated capacity tests. The individual tests are intended to benchmark the traffic management functions according to the metrics defined in Section 4. The capacity tests verify traffic management functions under the load of many simultaneous individual tests and their flows.
This involves concurrent testing of multiple interfaces with the specific traffic management function enabled, and increasing the load to the capacity limit of each interface.
For example, a device is specified to be capable of shaping on all of its egress ports. The individual test would first be conducted to benchmark the specified shaping function against the metrics defined in Section 4. Then, the capacity test would be executed to test the shaping function concurrently on all interfaces and with maximum traffic load.
The Network Delay Emulator (NDE) is required for TCP stateful tests in order to allow TCP to utilize a TCP window of significant size in its control loop.
Note also that the NDE SHOULD be passive in nature (e.g., a fiber spool). This is recommended to eliminate the potential effects that an active delay element (i.e., test impairment generator) may have on the test flows. In the case where a fiber spool is not practical due to the desired latency, an active NDE MUST be independently verified to be capable of adding the configured delay without loss. In other words, the Device Under Test (DUT) would be removed and the NDE performance benchmarked independently.
Note that the NDE SHOULD be used only as emulated delay. Most NDEs allow for per-flow delay actions, emulating QoS prioritization. For this framework, the NDE's sole purpose is simply to add delay to all packets (emulate network latency). So, to benchmark the performance of the NDE, the maximum offered load should be tested against the following frame sizes: 128, 256, 512, 768, 1024, 1500, and 9600 bytes. The delay accuracy at each of these packet sizes can then be used to calibrate the range of expected Bandwidth-Delay Product (BDP) for the TCP stateful tests.
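As a rough illustration of the BDP calibration mentioned above, the TCP window required to fill a path is the product of the Bottleneck Bandwidth (BB) and the round-trip delay configured on the NDE. The example values below are hypothetical:

```python
def bdp_bytes(bb_bps: float, rtt_s: float) -> float:
    """Bandwidth-Delay Product in bytes: BB (bits/s) times RTT (s), over 8."""
    return bb_bps * rtt_s / 8.0

# e.g., a 100 Mbps bottleneck with 25 ms of emulated round-trip delay
# requires roughly a 312,500-byte TCP window to be kept full.
window = bdp_bytes(100e6, 0.025)
```

Running this calculation across the expected range of NDE delay settings gives the BDP range that the delay-accuracy measurements (at each of the frame sizes listed above) should be calibrated against.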
2. Conventions Used in This Document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
The following acronyms are used:
AQM: Active Queue Management
BB: Bottleneck Bandwidth
BDP: Bandwidth-Delay Product
BSA: Burst Size Achieved
CBS: Committed Burst Size
CIR: Committed Information Rate
DUT: Device Under Test
EBS: Excess Burst Size
EIR: Excess Information Rate
NDE: Network Delay Emulator
QL: Queue Length
QoS: Quality of Service
RTT: Round-Trip Time
SBB: Shaper Burst Bytes
SBI: Shaper Burst Interval
SP: Strict Priority
SR: Shaper Rate
SSB: Send Socket Buffer
SUT: System Under Test
Ti: Transmission Interval
TTP: TCP Test Pattern
TTPET: TCP Test Pattern Execution Time
3. Scope and Goals

The scope of this work is to develop a framework for benchmarking and testing the traffic management capabilities of network devices in the lab environment. These network devices may include but are not limited to:
- Switches (including Layer 2/3 devices)
- Routers
- Firewalls
- General Layer 4-7 appliances (Proxies, WAN Accelerators, etc.)
Essentially, any network device that performs traffic management as defined in Section 1.1 can be benchmarked or tested with this framework.
The primary goal is to assess the maximum forwarding performance deemed to be within the provisioned traffic limits that a network device can sustain without dropping or impairing packets, and without compromising the accuracy of multiple instances of traffic management functions. This is the benchmark for comparison between devices.
Within this framework, the metrics are defined for each traffic management test but do not include pass/fail criteria, which are not within the charter of the BMWG. This framework provides the test methods and metrics to conduct repeatable testing, which will provide the means to compare measured performance between DUTs.
As mentioned in Section 1.2, these methods describe the individual tests and metrics for several management functions. It is also within scope that this framework will benchmark each function in terms of overall rated capacity. This involves concurrent testing of multiple interfaces with the specific traffic management function enabled, up to the capacity limit of each interface.
It is not within the scope of this framework to specify the procedure for testing multiple configurations of traffic management functions concurrently. The multitudes of possible combinations are almost unbounded, and the ability to identify functional "break points" would be almost impossible.
However, Section 6.4 provides suggestions for some profiles of concurrent functions that would be useful to benchmark. The key requirement for any concurrent test function is that tests MUST produce reliable and repeatable results.
Also, it is not within scope to perform conformance testing. Tests defined in this framework benchmark the traffic management functions according to the metrics defined in Section 4 and do not address any conformance to standards related to traffic management.
The current specifications don't specify exact behavior or implementation, and the specifications that do exist (cited in Section 1.1) allow implementations to vary with regard to short-term rate accuracy and other factors. This is a primary driver for this framework: to provide an objective means to compare vendor traffic management functions.
Another goal is to devise methods that utilize flows with congestion-aware transport (TCP) as part of the traffic load and still produce repeatable results in the isolated test environment. This framework will derive stateful test patterns (TCP or application layer) that can also be used to further benchmark the performance of applicable traffic management techniques such as queuing/scheduling and traffic shaping. In cases where the network device is stateful in nature (i.e., firewall, etc.), stateful test pattern traffic is important to test, along with stateless UDP traffic in specific test scenarios (i.e., applications using TCP transport and UDP VoIP, etc.).
As mentioned earlier in this document, repeatability of test results is critical, especially considering the nature of stateful TCP traffic. To this end, the stateful tests will use TCP test patterns to emulate applications. This framework also provides guidelines for application modeling and open source tools to achieve the repeatable stimulus. Finally, TCP metrics from [RFC6349] MUST be measured for each stateful test and provide the means to compare each repeated test.
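Two of the [RFC6349] metrics referenced above reduce to simple ratios over per-test counters; the formulas follow RFC 6349, while the example values in the test are hypothetical:

```python
def tcp_efficiency(tx_bytes: int, retx_bytes: int) -> float:
    """TCP Efficiency % per RFC 6349:
    (Transmitted Bytes - Retransmitted Bytes) / Transmitted Bytes x 100."""
    return (tx_bytes - retx_bytes) / tx_bytes * 100.0

def buffer_delay(avg_rtt_ms: float, baseline_rtt_ms: float) -> float:
    """Buffer Delay % per RFC 6349: RTT inflation during the transfer
    relative to the baseline (pre-test) RTT."""
    return (avg_rtt_ms - baseline_rtt_ms) / baseline_rtt_ms * 100.0
```

Because both metrics are ratios, they provide a stable basis for comparing repeated runs of the same TCP test pattern, which is exactly the repeatability requirement stated above.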
Even though this framework targets the testing of TCP applications (i.e., web, email, database, etc.), it could also be applied to the Stream Control Transmission Protocol (SCTP) in terms of test patterns. WebRTC, Signaling System 7 (SS7) signaling, and 3GPP are SCTP-based applications that could be modeled with this framework to benchmark SCTP's effect on traffic management performance.
Note that at the time of this writing, this framework does not address tcpcrypt (encrypted TCP) test patterns, although the metrics defined in Section 4.2 can still be used because the metrics are based on TCP retransmission and RTT measurements (versus any of the payload). Thus, if tcpcrypt becomes popular, it would be natural for benchmarkers to consider encrypted TCP patterns and include them in test cases.
4. Traffic Benchmarking Metrics

The metrics to be measured during the benchmarks are divided into two (2) sections: packet-layer metrics used for the stateless traffic testing and TCP-layer metrics used for the stateful traffic testing.
4.1. Metrics for Stateless Traffic Tests

Stateless traffic measurements require that a sequence number and timestamp be inserted into the payload for lost-packet analysis. Delay analysis may be achieved by insertion of timestamps directly into the packets or timestamps stored elsewhere (packet captures). This framework does not specify the packet format to carry sequence number or timing information.
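Since the on-wire format is left open, the following is purely one hypothetical encoding: a 32-bit sequence number and a 64-bit transmit timestamp packed at the head of the test payload and recovered on receipt for loss and delay analysis.

```python
import struct

# Hypothetical test-payload header: sequence number, tx timestamp (ns).
HDR = struct.Struct("!IQ")

def make_payload(seq: int, tx_ns: int, size: int) -> bytes:
    """Build a test payload of `size` bytes with the header at the front."""
    return HDR.pack(seq, tx_ns).ljust(size, b"\x00")

def parse_payload(data: bytes):
    """Recover (sequence number, tx timestamp) from a received payload."""
    return HDR.unpack_from(data)

def lost_sequences(received_seqs, first: int, last: int):
    """Sequence numbers in [first, last] never seen at the receiver."""
    return sorted(set(range(first, last + 1)) - set(received_seqs))
```

One-way delay per packet is then the receive time minus the recovered transmit timestamp (assuming synchronized clocks or a capture-based timing reference, as the text above allows).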
However, [RFC4737] and [RFC4689] provide recommendations for sequence tracking, along with definiti