Network Working Group                                           B. Aboba
Request for Comments: 3539                                     Microsoft
Category: Standards Track                                        J. Wood
                                                  Sun Microsystems, Inc.
                                                               June 2003

Authentication, Authorization and Accounting (AAA) Transport Profile

Status of this Memo

This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2003). All Rights Reserved.

Abstract

This document discusses transport issues that arise within protocols for Authentication, Authorization and Accounting (AAA). It also provides recommendations on the use of transport by AAA protocols. This includes usage of standards-track RFCs as well as experimental proposals.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  2
       1.1.  Requirements Language. . . . . . . . . . . . . . . . . .  2
       1.2.  Terminology. . . . . . . . . . . . . . . . . . . . . . .  2
   2.  Issues in Transport Usage. . . . . . . . . . . . . . . . . . .  5
       2.1.  Application-driven Versus Network-driven . . . . . . . .  5
       2.2.  Slow Failover. . . . . . . . . . . . . . . . . . . . . .  6
       2.3.  Use of Nagle Algorithm . . . . . . . . . . . . . . . . .  7
       2.4.  Multiple Connections . . . . . . . . . . . . . . . . . .  7
       2.5.  Duplicate Detection. . . . . . . . . . . . . . . . . . .  8
       2.6.  Invalidation of Transport Parameter Estimates. . . . . .  8
       2.7.  Inability to use Fast Re-Transmit. . . . . . . . . . . .  9
       2.8.  Congestion Avoidance . . . . . . . . . . . . . . . . . .  9
       2.9.  Delayed Acknowledgments. . . . . . . . . . . . . . . . . 11
       2.10. Premature Failover . . . . . . . . . . . . . . . . . . . 11
       2.11. Head of Line Blocking. . . . . . . . . . . . . . . . . . 11
       2.12. Connection Load Balancing. . . . . . . . . . . . . . . . 12
   3.  AAA Transport Profile. . . . . . . . . . . . . . . . . . . . . 12
       3.1.  Transport Mappings . . . . . . . . . . . . . . . . . . . 12
       3.2.  Use of Nagle Algorithm . . . . . . . . . . . . . . . . . 12
       3.3.  Multiple Connections . . . . . . . . . . . . . . . . . . 13
       3.4.  Application Layer Watchdog . . . . . . . . . . . . . . . 13
       3.5.  Duplicate Detection. . . . . . . . . . . . . . . . . . . 19
       3.6.  Invalidation of Transport Parameter Estimates. . . . . . 20
       3.7.  Inability to use Fast Re-Transmit. . . . . . . . . . . . 21
       3.8.  Head of Line Blocking. . . . . . . . . . . . . . . . . . 22
       3.9.  Congestion Avoidance . . . . . . . . . . . . . . . . . . 23
       3.10. Premature Failover . . . . . . . . . . . . . . . . . . . 24
   4.  Security Considerations. . . . . . . . . . . . . . . . . . . . 24
   5.  IANA Considerations. . . . . . . . . . . . . . . . . . . . . . 25
   6.  References . . . . . . . . . . . . . . . . . . . . . . . . . . 25
       6.1.  Normative References . . . . . . . . . . . . . . . . . . 25
       6.2.  Informative References . . . . . . . . . . . . . . . . . 26
   Appendix A - Detailed Watchdog Algorithm Description . . . . . . . 28
   Appendix B - AAA Agents. . . . . . . . . . . . . . . . . . . . . . 33
       B.1.  Relays and Proxies . . . . . . . . . . . . . . . . . . . 33
       B.2.  Re-directs . . . . . . . . . . . . . . . . . . . . . . . 35
       B.3.  Store and Forward Proxies. . . . . . . . . . . . . . . . 36
       B.4.  Transport Layer Proxies. . . . . . . . . . . . . . . . . 38
   Intellectual Property Statement. . . . . . . . . . . . . . . . . . 39
   Acknowledgments. . . . . . . . . . . . . . . . . . . . . . . . . . 39
   Author Addresses . . . . . . . . . . . . . . . . . . . . . . . . . 40
   Full Copyright Statement . . . . . . . . . . . . . . . . . . . . . 41

1. Introduction

This document discusses transport issues that arise within protocols for Authentication, Authorization and Accounting (AAA). It also provides recommendations on the use of transport by AAA protocols. This includes usage of standards-track RFCs as well as experimental proposals.

1.1. Requirements Language

In this document, the key words "MAY", "MUST", "MUST NOT", "optional", "recommended", "SHOULD", and "SHOULD NOT" are to be interpreted as described in [RFC2119].

1.2. Terminology

Accounting
   The act of collecting information on resource usage for the purpose of trend analysis, auditing, billing, or cost allocation.

Administrative Domain
   An internet, or a collection of networks, computers, and databases under a common administration.

Agent
   A AAA agent is an intermediary that communicates with AAA clients and servers. Several types of AAA agents exist, including Relays, Re-directs, and Proxies.

Application-driven transport
   Transport behavior is said to be "application-driven" when the rate at which messages are sent is limited by the rate at which the application generates data, rather than by the size of the congestion window. In the most extreme case, the time between transactions exceeds the round-trip time between sender and receiver, implying that the application operates with an effective congestion window of one. AAA transport is typically application driven.

Attribute Value Pair (AVP)
   The variable length concatenation of a unique Attribute (represented by an integer) and a Value containing the actual value identified by the attribute.

Authentication
   The act of verifying a claimed identity, in the form of a pre-existing label from a mutually known name space, as the originator of a message (message authentication) or as the end-point of a channel (entity authentication).

Authorization
   The act of determining if a particular right, such as access to some resource, can be granted to the presenter of a particular credential.

Billing
   The act of preparing an invoice.

Network Access Identifier
   The Network Access Identifier (NAI) is the userID submitted by the host during network access authentication. In roaming, the purpose of the NAI is to identify the user as well as to assist in the routing of the authentication request. The NAI may not necessarily be the same as the user's e-mail address or the user-ID submitted in an application layer authentication.

Network Access Server (NAS)
   A Network Access Server (NAS) is a device that hosts connect to in order to get access to the network.

Proxy
   In addition to forwarding requests and responses, proxies enforce policies relating to resource usage and provisioning. This is typically accomplished by tracking the state of NAS devices. While proxies typically do not respond to client Requests prior to receiving a Response from the server, they may originate Reject messages in cases where policies are violated. As a result, proxies need to understand the semantics of the messages passing through them, and may not support all extensions.

Local Proxy
   A Local Proxy is a proxy that exists within the same administrative domain as the network device (e.g. NAS) that issued the AAA request. Typically a local proxy is used to multiplex AAA messages to and from a large number of network devices, and may implement policy.

Store and forward proxy
   Store and forward proxies distinguish themselves from other proxy species by sending a reply to the NAS prior to proxying the request to the server. As a result, store and forward proxies need to implement AAA client and server functionality for the messages that they handle. Store and Forward proxies also typically keep state on conversations in progress in order to assure delivery of proxied Requests and Responses. While store and forward proxies are most frequently deployed for accounting, they also can be used to implement authentication/authorization policy.

Network-driven transport
   Transport behavior is said to be "network driven" when the rate at which messages are sent is limited by the congestion window, not by the rate at which the application can generate data. File transfer is an example of an application where transport is network driven.

Re-direct
   Rather than forwarding Requests and Responses between clients and servers, Re-directs refer clients to servers and allow them to communicate directly. Since Re-directs do not sit in the forwarding path, they do not alter any AVPs transiting between client and server. Re-directs do not originate messages and are capable of handling any message type. A Re-direct may be configured only to re-direct messages of certain types, while acting as a Relay or Proxy for other types. As with Relays, re-directs do not keep state with respect to conversations or NAS resources.

Relay
   Relays forward requests and responses based on routing-related AVPs and domain forwarding table entries. Since relays do not enforce policies, they do not examine or alter non-routing AVPs. As a result, relays never originate messages, do not need to understand the semantics of messages or non-routing AVPs, and are capable of handling any extension or message type. Since relays make decisions based on information in routing AVPs and domain forwarding tables they do not keep state on NAS resource usage or conversations in progress.

2. Issues in AAA Transport Usage

Issues that arise in AAA transport usage include:

   Application-driven versus network-driven
   Slow failover
   Use of Nagle Algorithm
   Multiple connections
   Duplicate detection
   Invalidation of transport parameter estimates
   Inability to use fast re-transmit
   Congestion avoidance
   Delayed acknowledgments
   Premature Failover
   Head of line blocking
   Connection load balancing

We discuss each of these issues in turn.

2.1. Application-driven versus Network-driven

AAA transport behavior is typically application rather than network driven. This means that the rate at which messages are sent is typically limited by how quickly they are generated by the application, rather than by the size of the congestion window.

For example, let us assume a 48-port NAS with an average session time of 20 minutes. This device will, on average, send only 144 authentication/authorization requests/hour, and an equivalent number of accounting requests. This represents an average inter-packet spacing of 25 seconds, which is much larger than the Round Trip Time (RTT) in most networks.

Even on much larger NAS devices, the inter-packet spacing is often larger than the RTT. For example, consider a 2048-port NAS with an average session time of 10 minutes. It will on average send 3.4 authentication/authorization requests/second, and an equivalent number of accounting requests. This translates to an average inter-packet spacing of 293 ms.
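
The arithmetic behind these estimates can be reproduced directly. The fragment below is a non-normative Python illustration; the port counts and session times are simply the figures from the two examples above.

   def aaa_request_rate(ports, avg_session_minutes):
       # Each port turns over once per average session time, generating one
       # authentication/authorization request (and one accounting request).
       return ports * (60.0 / avg_session_minutes)

   for ports, minutes in [(48, 20), (2048, 10)]:
       per_hour = aaa_request_rate(ports, minutes)
       spacing = 3600.0 / per_hour
       print("%4d ports: %6.0f auth requests/hour, %.3f s between requests"
             % (ports, per_hour, spacing))   # 144/hour, 25 s; 12288/hour, 0.293 s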

However, even where transport behavior is largely application-driven, periods of network-driven behavior can occur. For example, after a NAS reboot, previously stored accounting records may be sent to the accounting server in rapid succession. Similarly, after recovery from a power failure, users may respond with a large number of simultaneous logins. In both cases, AAA messages may be generated more quickly than the network will allow them to be sent, and a queue will build up.

Network congestion can occur when transport behavior is network-driven or application-driven. For example, while a single NAS may not send substantial AAA traffic, many NASes may communicate with a single AAA proxy or server. As a result, routers close to a heavily loaded proxy or server may experience congestion, even though traffic from each individual NAS is light. Such "convergent congestion" can result in dropped packets in routers near the AAA server, or even within the AAA server itself.

Let us consider what happens when 10,000 48-port NASes, each with an average session time of 20 minutes, are configured with the same AAA agent or server. The unfortunate proxy or server would receive 400 authentication/authorization requests/second and an equivalent number of accounting requests. For 1000 octet requests, this would generate 6.4 Mbps of incoming traffic at the AAA agent or server.
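
The aggregate figures in this example can be checked the same way. The following non-normative Python fragment assumes 1000-octet requests and counts both the authentication/authorization and the accounting streams.

   nases = 10000                  # 48-port NASes
   ports_per_nas = 48
   avg_session_minutes = 20
   request_octets = 1000

   auth_per_second = nases * ports_per_nas / (avg_session_minutes * 60.0)
   total_per_second = 2 * auth_per_second      # plus an equal accounting load
   incoming_mbps = total_per_second * request_octets * 8 / 1e6
   print("%.0f auth requests/s, %.1f Mbps at the agent or server"
         % (auth_per_second, incoming_mbps))   # 400 requests/s, 6.4 Mbps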

While this transaction load is within the capabilities of the fastest AAA agents and servers, implementations exist that cannot handle such a high load. Thus high queuing delays and/or dropped packets may be experienced at the agent or server, even if routers on the path are not congested. Thus, a well designed AAA protocol needs to be able to handle congestion occurring at the AAA server, as well as congestion experienced within the network.

2.2. Slow Failover

Where TCP [RFC793] is used as the transport, AAA implementations will experience very slow fail over times if they wait until a TCP connection times out before resending on another connection. This is not an issue for SCTP [RFC2960], which supports endpoint and path failure detection. As described in section 8 of [RFC2960], when the number of retransmissions exceeds the maximum ("Association.Max.Retrans"), the peer endpoint is considered unreachable, the association enters the CLOSED state, and the failure is reported to the application. This enables more rapid failure detection.

2.3. Use of Nagle Algorithm

AAA protocol messages are often smaller than the maximum segment size (MSS). While exceptions occur when certificate-based authentication messages are issued or where a low path MTU is found, typically AAA protocol messages are less than 1000 octets. Therefore, when using TCP [RFC793], the total packet count and associated network overhead can be reduced by combining multiple AAA messages within a single packet.

Where AAA runs over TCP and transport behavior is network-driven, such as after a reboot when many users login simultaneously, or many stored accounting records need to be sent, the Nagle algorithm will result in "transport layer batching" of AAA messages. While this does not reduce the work required by the application in parsing packets and responding to the messages, it does reduce the number of packets processed by routers along the path. The Nagle algorithm is not used with SCTP.

Where AAA transport is application-driven, the NAS will typically receive a reply from the home server prior to having another request to send. This implies, for example, that accounting requests will typically be sent individually rather than being batched by the transport layer. As a result, within the application-driven regime, the Nagle algorithm [RFC896] is ineffective.

2.4. Multiple Connections

Since the RADIUS [RFC2865] Identifier field is a single octet, a maximum of 256 requests can be in progress between two endpoints described by a 5-tuple: (Client IP address, Client port, UDP, Server IP address, Server port). In order to get around this limitation, RADIUS clients have utilized more than one sending port, sometimes even going to the extreme of using a different UDP source port for each NAS port.

Were this behavior to be extended to AAA protocols operating over reliable transport, the result would be multiplication of the effective slow-start ramp-up by the number of connections. For example, if a AAA client had ten connections open to a AAA agent, and used a per-connection initial window [RFC3390] of 2, then the effective initial window would be 20. This is inappropriate, since it would permit the AAA client to send a large burst of packets into the network.

2.5. Duplicate Detection

Where a AAA client maintains connections to multiple AAA agents or servers, and where failover/failback or connection load balancing is supported, it is possible for multiple agents or servers to receive duplicate copies of the same transaction. A transaction may be sent on another connection before expiration of the "time wait" interval necessary to guarantee that all packets sent on the original connection have left the network. Therefore it is conceivable that transactions sent on the alternate connection will arrive before those sent on the failed connection. As a result, AAA agents and servers MUST be prepared to handle duplicates, and MUST assume that duplicates can arrive on any connection.

For example, in billing, it is necessary to be able to weed out duplicate accounting records, based on the accounting session-id, event-timestamp and NAS identification information. Where authentication requests are always idempotent, the resultant duplicate responses from multiple servers will presumably be identical, so that little harm will result.

However, there are situations where the response to an authentication request will depend on a previously established state, such as when simultaneous usage restrictions are being enforced. In such cases, authentication requests will not be idempotent. For example, while an initial request might elicit an Accept response, a duplicate request might elicit a Reject response from another server, if the user were already presumed to be logged in, and only one simultaneous session were permitted. In these situations, the AAA client might receive both Accept and Reject responses to the same duplicate request, and the outcome will depend on which response arrives first.

2.6. Invalidation of Transport Parameter Estimates

Congestion control principles [Congest],[RFC2914] require the ability of a transport protocol to respond effectively to congestion, as sensed via increasing delays, packet loss, or explicit congestion notification.

With network-driven applications, it is possible to respond to congestion on a timescale comparable to the round-trip time (RTT).

However, with AAA protocols, the time between sends may be longer than the RTT, so that the network conditions can not be assumed to persist between sends. For example, the congestion window may grow during a period in which congestion is being experienced because few packets are sent, limiting the opportunity for feedback. Similarly, after congestion is detected, the congestion window may remain small, even though the network conditions that existed at the time of congestion no longer apply by the time when the next packets are sent. In addition, due to the low sampling interval, estimates of RTT and RTO made via the procedure described in [RFC2988] may become invalid.

2.7. Inability to Use Fast Re-transmit

When congestion window validation [RFC2861] is implemented, the result is that AAA protocols operate much of the time in slow-start with an initial congestion window set to 1 or 2, depending on the implementation [RFC3390]. This implies that AAA protocols gain little benefit from the windowing features of reliable transport.

Since the congestion window is so small, it is generally not possible to receive enough duplicate ACKs (3) to trigger fast re-transmit. In addition, since AAA traffic is two-way, ACKs including data will not count as part of the duplicate ACKs necessary to trigger fast re-transmit. As a result, dropped packets will require a retransmission timeout (RTO).

2.8. Congestion Avoidance

The law of conservation of packets [Congest] suggests that a client should not send another packet into the network until it can be reasonably sure that a packet has exited the network on the same path. In the case of a AAA client, the law suggests that it should not retransmit to the same server or choose another server until it can be reasonably sure that a packet has exited the network on the same path. If the client advances the window as responses arrive, then the client will "self clock", adjusting its transmission rate to the available bandwidth.

While a AAA client using a reliable transport such as TCP [RFC793] or SCTP [RFC2960] will self-clock when communicating directly with a AAA-server, end-to-end self-clocking is not assured when AAA agents are present.

As described in the Appendix, AAA agents include Relays, Proxies, Re-directs, Store and Forward proxies, and Transport proxies. Of these agents, only Transport proxies and Re-directs provide a direct transport connection between the AAA client and server, allowing end-to-end self-clocking to occur.

With Relays, Proxies or Store and Forward proxies, two separate and de-coupled transport connections are used. One connection operates between the AAA client and agent, and another between the agent and server. Since the two transport connections are de-coupled, transport layer ACKs do not flow end-to-end, and self-clocking does not occur.

For example, consider what happens when the bottleneck exists between a AAA Relay and a AAA server. Self-clocking will occur between the AAA client and AAA Relay, causing the AAA client to adjust its sending rate to the rate at which transport ACKs flow back from the AAA Relay. However, since this rate is higher than the bottleneck bandwidth, the overall system will not self-clock.

Since there is no direct transport connection between the AAA client and AAA server, the AAA client does not have the ability to estimate end-to-end transport parameters and adjust its sending rate to the bottleneck bandwidth between the Relay and server. As a result, the incoming rate at the AAA Relay can be higher than the rate at which packets can be sent to the AAA server.

In this case, the end-to-end performance will be determined by details of the agent implementation. In general, the end-to-end transport performance in the presence of Relays, Proxies or Store and Forward proxies will always be worse in terms of delay and packet loss than if the AAA client and server were communicating directly.

For example, if the agent operates with a large receive buffer, it is possible that a large queue will develop on the receiving side, since the AAA client is able to send packets to the AAA agent more rapidly than the agent can send them to the AAA server. Eventually, the buffer will overflow, causing wholesale packet loss as well as high delay.

Methods to induce fine-grained coupling between the two transport connections are difficult to implement. One possible solution is for the AAA agent to operate with a receive buffer that is no larger than its send buffer. If this is done, "back pressure" (closing of the receive window) will cause the agent to reduce the AAA client sending rate when the agent send buffer fills. However, unless multiple connections exist between the AAA client and AAA agent, closing of the receive window will affect all traffic sent by the AAA client, even traffic destined to AAA servers where no bottleneck exists. Since multiple connections between a AAA client and agent result in multiplication of the effective slow-start ramp rate, this is not recommended. As a result, use of "back pressure" cannot enable individual AAA client-server conversations to self-clock, and this technique appears impractical for use in AAA.

2.9. Delayed Acknowledgments

As described in Appendix B, ACKs may comprise as much as half of the traffic generated in a AAA exchange. This occurs because AAA conversations are typically application-driven, and therefore there is frequently not enough traffic to enable ACK piggybacking. As a result, AAA protocols running over TCP or SCTP transport may experience a doubling of traffic as compared with implementations utilizing UDP transport.

It is typically not possible to address this issue via the sockets API. ACK parameters (such as the value of the delayed ACK timer) are typically fixed by TCP and SCTP implementations and are therefore not tunable by the application.

2.10. Premature Failover

RADIUS failover implementations are typically based on the concept of primary and secondary servers, in which all traffic flows to the primary server unless it is unavailable. However, the failover algorithm was not specified in [RFC2865] or [RFC2866]. As a result, RADIUS failover implementations vary in quality, with some failing over prematurely, violating the law of "conservation of packets".

Where a Relay, Proxy or Store and Forward proxy is present, the AAA client has no direct connection to a AAA server, and is unable to estimate the end-to-end transport parameters. As a result, a AAA client awaiting an application-layer response from the server has no transport-based mechanism for determining an appropriate failover timer.

For example, if the path between the AAA agent and server includes a high delay link, or if the AAA server is very heavily loaded, it is possible that the NAS will failover to another agent while packets are still in flight. This violates the principle of "conservation of packets", since the AAA client will inject additional packets into the network before having evidence that a previously sent packet has left the network. Such behavior can result in a worse situation on an already congested link, resulting in congestive collapse [Congest].

2.11. Head of Line Blocking

Head of line blocking occurs during periods of packet loss where the time between sends is shorter than the re-transmission timeout value (RTO). In such situations, packets back up in the send queue until the lost packet can be successfully re-transmitted. This can be an issue for SCTP when using ordered delivery over a single stream, and for TCP.

Head of line blocking is typically an issue only on larger NASes. For example, a 48-port NAS with an average inter-packet spacing of 25 seconds is unlikely to have an RTO greater than this, unless severe packet loss has been experienced. However, a 2048-port NAS with an average inter-packet spacing of 293 ms may experience head-of-line blocking since the inter-packet spacing is less than the minimum RTO value of 1 second [RFC2988].

2.12. Connection Load Balancing

In order to lessen queuing delays and address head of line blocking, a AAA implementation may wish to load balance between connections to multiple destinations. While it is possible to employ dynamic load balancing techniques, this level of sophistication may not be required. In many situations, adequate reliability and load balancing can be achieved via static load balancing, where traffic is distributed between destinations based on static "weights".

3. AAA Transport Profile

In order to address AAA transport issues, it is recommended that AAA protocols make use of standards track as well as experimental techniques. More details are provided in the sections that follow.

3.1. Transport Mappings

AAA Servers MUST support TCP and SCTP. AAA clients SHOULD support SCTP, but MUST support TCP if SCTP is not available. As support for SCTP improves, it is possible that SCTP support will be required on clients at some point in the future. AAA agents inherit all the obligations of Servers with respect to transport support.

3.2. Use of Nagle Algorithm

While AAA protocols typically operate in the application-driven regime, there are circumstances in which they are network driven. For example, where a NAS reboots, or where connectivity is restored between a NAS and a AAA agent, it is possible that multiple packets will be available for sending.

As a result, there are circumstances where the transport-layer batching provided by the Nagle Algorithm is useful, and as a result, AAA implementations running over TCP MUST enable the Nagle algorithm [RFC896]. The Nagle algorithm is not used with SCTP.
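
In terms of the sockets API, keeping the Nagle algorithm enabled simply means not disabling it. The non-normative Python fragment below checks that TCP_NODELAY is left at its default (off) on a newly created TCP socket; the check itself is illustrative and not required by this document.

   import socket

   sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   # The Nagle algorithm is enabled by default on a TCP socket; an AAA
   # implementation simply refrains from setting TCP_NODELAY, leaving
   # transport-layer batching in effect.
   assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) == 0
   sock.close()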

3.3. Multiple Connections

AAA protocols SHOULD use only a single persistent connection between a AAA client and a AAA agent or server. They SHOULD provide for pipelining of requests, so that more than one request can be in progress at a time. In order to minimize use of inactive connections in roaming situations, a AAA client or agent MAY bring down a connection to a AAA agent or server if the connection has been unutilized (discounting the watchdog) for a certain period of time, which MUST NOT be less than BRINGDOWN_INTERVAL (5 minutes).
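
As an illustration of the bring-down rule above, the following non-normative Python sketch records the most recent non-watchdog activity on a connection and brings the connection down only after the interval has elapsed. The PeerConnection class and its transport attribute are invented for the sketch.

   import time

   BRINGDOWN_INTERVAL = 5 * 60        # seconds; MUST NOT be smaller

   class PeerConnection:
       def __init__(self, transport):
           self.transport = transport
           self.last_activity = time.monotonic()

       def note_traffic(self, is_watchdog):
           # Watchdog exchanges are discounted when judging whether the
           # connection is actually being used.
           if not is_watchdog:
               self.last_activity = time.monotonic()

       def maybe_bring_down(self):
           idle = time.monotonic() - self.last_activity
           if idle >= BRINGDOWN_INTERVAL:
               self.transport.close()
               return True
           return False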

While a AAA client/agent SHOULD only use a single persistent connection to a given AAA agent or server, it MAY have connections to multiple AAA agents or servers. A AAA client/agent connected to multiple agents/servers can treat them as primary/secondary or balance load between them.

3.4. Application Layer Watchdog

In order to enable AAA implementations to more quickly detect transport and application-layer failures, AAA protocols MUST support an application layer watchdog message.

The application layer watchdog message enables failover from a peer that has failed, either because it is unreachable or because its applications functions have failed. This is distinct from the purpose of the SCTP heartbeat, which is to enable failover between interfaces. The SCTP heartbeat may enable a failover to another path to reach the same server, but does not address the situation where the server system or the application service has failed. Therefore both mechanisms MAY be used together.

The watchdog is used in order to enable a AAA client or agent to determine when to resend on another connection. It operates on all open connections and is used to suspend and eventually close connections that are experiencing difficulties. The watchdog is also used to re-open and validate connections that have returned to health. The watchdog may be utilized either within primary/secondary or load balancing configurations. However, it is not intended as a cluster heartbeat mechanism.

The application layer watchdog is designed to detect failures of the immediate peer, and not to be affected by failures of downstream proxies or servers. This prevents instability in downstream AAA components from propagating upstream. While the receipt of any AAA Response from a peer is taken as evidence that the peer is up, lack of a Response is insufficient to conclude that the peer is down. Since the lack of Response may be the result of problems with a downstream proxy or server, only after failure to respond to the watchdog message can it be determined that the peer is down.

Since the watchdog algorithm takes any AAA Response into account in determining peer liveness, decreases in the watchdog timer interval do not significantly increase the level of watchdog traffic on heavily loaded networks. This is because watchdog messages do not need to be sent where other AAA Response traffic serves as a constant reminder of peer liveness. Watchdog traffic only increases when AAA traffic is light, and therefore a AAA Response "signal" is not present. Nevertheless, decreasing the timer interval TWINIT does increase the probability of false failover significantly, and so this decision should be made with care.

3.4.1. Algorithm Overview

The watchdog behavior is controlled by an algorithm defined in this section. This algorithm is appropriate for use either within primary/secondary or load balancing configurations; a non-normative sketch is given after the numbered list below. Implementations SHOULD implement this algorithm, which operates as follows:

[1] Watchdog behavior is controlled by a single timer (Tw). The initial value of Tw, prior to jittering is Twinit. The default value of Twinit is 30 seconds. This value was selected because it minimizes the probability that failover will be initiated due to a routing flap, as noted in [Paxson].

While Twinit MAY be set as low as 6 seconds (not including jitter), it MUST NOT be set lower than this. Note that setting such a low value for Twinit is likely to result in an increased probability of duplicates, as well as an increase in spurious failover and failback attempts.

In order to avoid synchronization behaviors that can occur with fixed timers among distributed systems, each time the watchdog interval is calculated with a jitter by using the Twinit value and randomly adding a value drawn between -2 and 2 seconds. Alternative calculations to create jitter MAY be used. These MUST be pseudo-random, generated by a PRNG seeded as per [RFC1750].

[2] When any AAA message is received, Tw is reset. This need not be a response to a watchdog request. Receiving a watchdog response from a peer constitutes activity, and Tw should be reset. If the watchdog timer expires and no watchdog response is pending, then a watchdog message is sent. On sending a watchdog request, Tw is reset.

Watchdog packets are not retransmitted by the AAA protocol, since AAA protocols run over reliable transports that will handle all retransmissions internally. As a result, a watchdog request is only sent when there is no watchdog response pending.

[3] If the watchdog timer expires and a watchdog response is pending, then failover is initiated. In order for a AAA client or agent to perform failover procedures, it is necessary to maintain a pending message queue for a given peer. When an answer message is received, the corresponding request is removed from the queue. The Hop-by-Hop Identifier field MAY be used to match the answer with the queued request.

When failover is initiated, all messages in the queue are sent to an alternate agent, if available. Multiple identical requests or answers may be received as a result of a failover. The combination of an end-to-end identifier and the origin host MUST be used to identify duplicate messages.

Note that where traffic is heavy, the application layer watchdog can take as long as 2Tw to determine that a peer has gone down. For peers receiving a high volume of AAA Requests, AAA Responses will continually reset the timer, so that after a failure it will take Tw for the lack of traffic to be noticed, and for the watchdog message to be sent. Another Tw will elapse before failover is initiated.

On a lightly loaded network without much AAA Response traffic, the watchdog timer will typically expire without being reset, so that a watchdog response will be outstanding and failover will be initiated after only a single timer interval has expired.

[4] The client MUST NOT close the primary connection until the primary's watchdog timer has expired at least twice without a response (note that the watchdog is not sent a second time, however). Once this has occurred, the client SHOULD cause a transport reset or close to be done on the connection.

Once the primary connection has failed, subsequent requests are sent to the alternate server until the watchdog timer on the primary connection is reset.

Suspension of the primary connection prevents flapping between primary and alternate connections, and ensures that failover behavior remains consistent. The application may not receive a response to the watchdog request message due to a connectivity problem, in which case a transport layer ACK will not have been received, or the lack of response may be due to an application problem. Without transport layer visibility, the application is unable to tell the difference, and must behave conservatively.

问题如果没有传输层可见性,应用程序就无法区分两者之间的区别,因此必须谨慎行事。

In situations where no transport layer ACK is received on the primary connection after multiple re-transmissions, the RTO will be exponentially backed off as described in [RFC2988]. Due to Karn's algorithm as implemented in SCTP and TCP, the RTO estimator will not be reset until another ACK is received in response to a non-re-transmitted request. Thus, in cases where the problem occurs at the transport layer, after the client fails over to the alternate server, the RTO of the primary will remain at a high value unless an ACK is received on the primary connection.

在多次重新传输后主连接上未接收到传输层ACK的情况下,RTO将按[RFC2988]中所述以指数方式退出。由于在SCTP和TCP中实现的Karn算法,RTO估计器将不会重置,直到收到另一个ACK以响应未重新传输的请求。因此,在传输层出现问题的情况下,在客户端故障切换到备用服务器之后,除非在主连接上接收到ACK,否则主连接的RTO将保持在高值。

In the case where the problem occurs at the transport layer, subsequent requests sent on the primary connection will not receive the same service as was originally provided. For example, instead of failover occurring after 3 retransmissions, failover might occur without even a single retransmission if RTO has been sufficiently backed off. Of course, if the lack of a watchdog response was due to an application layer problem, then RTO will not have been backed off. However, without transport layer visibility, there is no way for the application to know this.

在传输层出现问题的情况下,在主连接上发送的后续请求将不会收到与最初提供的服务相同的服务。例如,如果RTO已被充分备份,则故障切换可能会在没有一次重新传输的情况下发生,而不是在3次重新传输后发生。当然,如果缺少看门狗响应是由于应用层问题造成的,那么RTO将不会退出。但是,如果没有传输层可见性,应用程序就无法知道这一点。

Suspending use of the primary connection until a response to a watchdog message is received guarantees that the RTO timer will have been reset before the primary connection is reused. If no response is received after the second watchdog timer expiration, then the primary connection is closed and the suspension becomes permanent.

在收到对看门狗消息的响应之前暂停主连接的使用可确保RTO计时器在重新使用主连接之前已重置。如果在第二个看门狗定时器到期后未收到响应,则主连接关闭,暂停变为永久性。

[5] While the connection is in the closed state, the AAA client MUST NOT attempt to send further watchdog messages on the connection. However, after the connection is closed, the AAA client continues to periodically attempt to reopen the connection.

[5] 当连接处于关闭状态时,AAA客户端不得尝试在连接上发送更多的看门狗消息。但是,在连接关闭后,AAA客户端会继续定期尝试重新打开连接。

The AAA client SHOULD wait for the transport layer to report connection failure before attempting again, but MAY choose to bound this wait time by the watchdog interval, Tw. If the connection is successfully opened, then the watchdog message is sent. Once three watchdog messages have been sent and responded to, the connection is returned to service, and transactions are once again sent over it. Connection validation via receipt of multiple watchdogs is not required when a connection is initially brought up -- in this case, the connection can immediately be put into service.

AAA客户端应等待传输层报告连接失败后再重试,但可以选择按看门狗间隔Tw限制此等待时间。如果连接成功打开,则发送看门狗消息。一旦发送并响应了三条看门狗消息,连接将返回到服务,事务将再次通过它发送。当连接最初启动时,不需要通过接收多个看门狗进行连接验证——在这种情况下,连接可以立即投入使用。

[6] When using SCTP as a transport, it is not necessary to disable SCTP's transport-layer heartbeats. However, if AAA implementations have access to SCTP's heartbeat parameters, they MAY chose to ensure that SCTP's heartbeat interval is longer than the AAA watchdog interval, Tw. This will ensure that alternate paths are still probed by SCTP, while the primary path has a minimum of heartbeat redundancy.
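
The transitions described in items [1] through [6] can be summarized in the following non-normative Python sketch; Appendix A remains the authoritative description. The peer object and its send_watchdog(), failover_pending_requests(), and close_transport() methods are placeholders invented for the sketch.

   import random

   TWINIT = 30.0                       # default watchdog interval, in seconds

   def jittered_tw():
       # Jitter the interval by a value drawn between -2 and 2 seconds; a
       # real implementation would use a generator seeded as per RFC 1750.
       return TWINIT + random.uniform(-2.0, 2.0)

   class WatchdogState:
       OKAY, SUSPECT, DOWN, REOPEN = range(4)

       def __init__(self, peer):
           self.peer = peer
           self.state = WatchdogState.OKAY
           self.pending = False         # is a watchdog request outstanding?
           self.num_answers = 0         # watchdog answers since re-opening
           self.tw = jittered_tw()      # [1] single timer Tw, jittered

       def on_receive(self, is_watchdog_answer):
           # [2] Receipt of any AAA message resets Tw and, for a suspect
           # connection, constitutes evidence that the peer is up again.
           self.tw = jittered_tw()
           if is_watchdog_answer:
               self.pending = False
           if self.state == WatchdogState.SUSPECT:
               self.state = WatchdogState.OKAY
           elif self.state == WatchdogState.REOPEN and is_watchdog_answer:
               self.num_answers += 1
               if self.num_answers >= 3:         # [5] validated, in service
                   self.state = WatchdogState.OKAY

       def on_timer_expiry(self):
           self.tw = jittered_tw()
           if not self.pending:
               # [2] No watchdog outstanding: probe the peer.
               self.peer.send_watchdog()
               self.pending = True
           elif self.state == WatchdogState.OKAY:
               # [3] Expiry with a watchdog pending: suspend the connection
               # and fail over queued messages; no second watchdog is sent.
               self.state = WatchdogState.SUSPECT
               self.peer.failover_pending_requests()
           else:
               # [4] Second expiry without a response: close the connection.
               self.state = WatchdogState.DOWN
               self.peer.close_transport()

       def on_reopened(self):
           # [5] A re-opened connection returns to service only after three
           # watchdog exchanges have succeeded.
           self.state = WatchdogState.REOPEN
           self.pending = False
           self.num_answers = 0
           self.tw = jittered_tw()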

3.4.2. Primary/Secondary Failover Support

The watchdog timer MAY be integrated with primary/secondary style failover so as to provide improved reliability and basic load balancing. In order to balance load among multiple AAA servers, each AAA server is designated the primary for a portion of the clients, and designated as secondaries of varying priority for the remainder. In this way, load can be balanced among the AAA servers.

Within primary/secondary configurations, the watchdog timer operates as follows (a non-normative sketch is given after the list):

[1] Assume that each client or agent is initially configured with a single primary agent or server, and one or more secondary connections.

[2] The watchdog mechanism is used to suspend and eventually close primary connections that are experiencing difficulties. It is also used to re-open and validate connections that have returned to health.

[3] Once a secondary is promoted to primary status, either on a temporary or permanent basis, the next server on the list of secondaries is promoted to fill the open secondary slot.

[4] The client or agent periodically attempts to re-open closed connections, so that it is possible that a previously closed connection can be returned to service and become eligible for use again. Implementations will typically retain a limit on the number of connections open at a time, so that once a previously closed connection is brought online again, the lowest priority secondary connection will be closed. In order to prevent periodic closing and re-opening of secondary connections, it is recommended that functioning connections remain open for a minimum of 5 minutes.

[5] In order to enable diagnosis of failover behavior, it is recommended that a table of failover events be kept within the MIB. These failover events SHOULD include appropriate transaction identifiers so that client and server data can be compared, providing insight into the cause of the problem (transport or application layer).
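
A non-normative sketch of the promotion behavior in items [1] through [4] follows. The FailoverList class, its max_open limit, and the server names are invented for illustration; only the promotion and restoration steps are intended to mirror the text above.

   class FailoverList:
       def __init__(self, servers, max_open=2):
           # servers are listed in priority order: primary first, then
           # secondaries of decreasing priority.
           self.servers = list(servers)
           self.max_open = max_open
           self.open = self.servers[:max_open]

       def demote(self, failed):
           # [2]/[3] When an open connection is suspended or closed, the
           # next server on the list of secondaries is promoted to fill
           # the open slot.
           if failed in self.open:
               self.open.remove(failed)
               for candidate in self.servers:
                   if candidate != failed and candidate not in self.open:
                       self.open.append(candidate)
                       break

       def restore(self, healed):
           # [4] A re-opened, validated connection returns to service; the
           # lowest priority open connection is closed to respect the limit.
           if healed not in self.open:
               self.open.append(healed)
           self.open.sort(key=self.servers.index)
           while len(self.open) > self.max_open:
               self.open.pop()

   peers = FailoverList(["server1", "server2", "server3"])
   peers.demote("server1")      # server3 is promoted alongside server2
   peers.restore("server1")     # server1 resumes; server3 drops back out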

比较,提供对问题原因的深入了解(传输层或应用层)。

3.4.3. Connection Load Balancing

Primary/secondary failover is capable of providing improved resilience and basic load balancing. However, it does not address TCP head of line blocking, since only a single connection is in use at a time.

A AAA client or agent maintaining connections to multiple agents or servers MAY load balance between them. Establishing connections to multiple agents or servers reduces, but does not eliminate, head of line blocking issues experienced on TCP connections. This issue does not exist with SCTP connections utilizing multiple streams.

In connection load balancing configurations, the application watchdog operates as follows:

[1] Assume that each client or agent is initially configured with connections to multiple AAA agents or servers, with one connection between a given client/agent and an agent/server.

[2] In static load balancing, transactions are apportioned among the connections based on the total number of connections and a "weight" assigned to each connection. Pearson's hash [RFC3074] applied to the NAI [RFC2486] can be used to determine which connection will handle a given transaction. Hashing on the NAI provides highly granular load balancing, while ensuring that all traffic for a given conversation will be sent to the same agent or server. In dynamic load balancing, the value of the "weight" can vary based on conditions such as AAA server load. Such techniques, while sophisticated, are beyond the scope of this document. A non-normative sketch of the static case is given after this list.

[3] Transactions are distributed to connections based on the total number of available connections and their weights. A change in the number of available connections forces recomputation of the hash table. In order not to cause conversations in progress to be switched to new destinations, on recomputation, a transitional period is required in which both old and new hash tables are needed in order to permit aging out of conversations in progress. Note that this requires a way to easily determine whether a Request represents a new conversation or the continuation of an existing conversation. As a result, removing and adding of connections is an expensive operation, and it is recommended that the hash table only be recomputed once a connection is closed or returned to service.

[3] 事务根据可用连接的总数及其权重分配给连接。可用连接数的更改将强制重新计算哈希表。为了不使正在进行的对话切换到新的目的地,在重新计算时,需要一个过渡期,在此过渡期内需要新旧哈希表,以便允许正在进行的对话老化。请注意,这需要一种方法来轻松确定请求是代表新对话还是代表现有对话的继续。因此,删除和添加连接是一项昂贵的操作,建议仅在连接关闭或返回服务后重新计算哈希表。

Suspended connections, although they are not used, do not force hash table reconfiguration until they are closed. Similarly, re-opened connections not accumulating sufficient watchdog responses do not force a reconfiguration until they are returned to service.

While a connection is suspended, transactions that were to have been assigned to it are instead assigned to the next available server. While this results in a momentary imbalance, it is felt that this is a relatively small price to pay in order to reduce hash table thrashing.

[4] In order to enable diagnosis of load balancing behavior, it is recommended that in addition to a table of failover events, a table of statistics be kept on each client, indexed by AAA server. That way, the effectiveness of the load balancing algorithm can be evaluated.
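
As an illustration of the static scheme described in [2], the following non-normative C sketch applies an 8-bit Pearson hash [RFC3074] to the NAI and maps the result onto a weighted table of available connections. The permutation initialization, the weight handling, and the function names are illustrative assumptions rather than requirements of any AAA protocol.

   #include <stdint.h>
   #include <stddef.h>

   static uint8_t perm[256];          /* Pearson permutation table */
   static int     perm_ready = 0;

   /* Any fixed permutation of 0..255 will do; [RFC3074] suggests a
    * randomly generated one.  (i * 167 + 13) mod 256 is a permutation
    * because 167 is odd. */
   static void init_perm(void)
   {
       int i;
       for (i = 0; i < 256; i++)
           perm[i] = (uint8_t)(i * 167 + 13);
       perm_ready = 1;
   }

   /* Pearson's 8-bit hash computed over the NAI string. */
   static uint8_t pearson_hash(const char *nai)
   {
       uint8_t h = 0;
       size_t  i;

       if (!perm_ready)
           init_perm();
       for (i = 0; nai[i] != '\0'; i++)
           h = perm[h ^ (uint8_t)nai[i]];
       return h;
   }

   /* Map the hash onto the available connections in proportion to
    * their weights.  Because the hash depends only on the NAI, all
    * messages for a given conversation select the same connection. */
   int select_connection(const char *nai,
                         const unsigned weight[], int num_conn)
   {
       unsigned total = 0, cum = 0;
       int      i;
       uint8_t  h = pearson_hash(nai);

       for (i = 0; i < num_conn; i++)
           total += weight[i];

       for (i = 0; i < num_conn; i++) {
           cum += weight[i];
           if ((unsigned)h * total < cum * 256)
               return i;
       }
       return num_conn - 1;           /* reached only if total == 0 */
   }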

3.5. Duplicate Detection

Multiple facilities are required to enable duplicate detection. These include session identifiers as well as hop-by-hop and end-to-end message identifiers. Hop-by-hop identifiers whose value may change at each hop are not sufficient, since a AAA server may receive the same message from multiple agents. For example, a AAA client can send a request to Agent1, then failover and resend the request to Agent2; both agents forward the request to the home AAA server, with different hop-by-hop identifiers. A Session Identifier is insufficient as it does not distinguish different messages for the same session.

Proper treatment of the end-to-end message identifier ensures that AAA operations are idempotent. For example, without an end-to-end identifier, a AAA server keeping track of simultaneous logins might send an Accept in response to an initial Request, and then a Reject in response to a duplicate Request (where the user was allowed only one simultaneous login). Depending on which Response arrived first, the user might be allowed access or not.

However, if the server were to store the end-to-end message identifier along with the simultaneous login information, then the duplicate Request (which utilizes the same end-to-end message identifier) could be identified and the correct response could be returned.
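
The following non-normative C sketch illustrates this technique. The cache structure, its size, and the function names are illustrative assumptions; a real server would also age entries out once the duplicate-detection window expires.

   #include <stdint.h>
   #include <string.h>

   #define CACHE_SIZE 1024            /* illustrative sizing only */

   struct dup_entry {
       int      in_use;
       uint32_t end_to_end_id;        /* end-to-end message identifier */
       char     session_id[64];
       char     response[256];        /* response originally returned  */
   };

   static struct dup_entry cache[CACHE_SIZE];

   /* Return the stored response for a previously answered request, or
    * NULL if this (session, end-to-end identifier) pair is new. */
   const char *find_duplicate(const char *session_id, uint32_t e2e_id)
   {
       struct dup_entry *e = &cache[e2e_id % CACHE_SIZE];

       if (e->in_use && e->end_to_end_id == e2e_id &&
           strcmp(e->session_id, session_id) == 0)
           return e->response;
       return NULL;
   }

   /* Record the response so that a retransmitted Request carrying the
    * same end-to-end identifier (possibly arriving via a different
    * agent) is answered identically instead of being re-processed.
    * Collisions simply overwrite the older entry in this sketch. */
   void record_response(const char *session_id, uint32_t e2e_id,
                        const char *response)
   {
       struct dup_entry *e = &cache[e2e_id % CACHE_SIZE];

       e->in_use        = 1;
       e->end_to_end_id = e2e_id;
       strncpy(e->session_id, session_id, sizeof(e->session_id) - 1);
       e->session_id[sizeof(e->session_id) - 1] = '\0';
       strncpy(e->response, response, sizeof(e->response) - 1);
       e->response[sizeof(e->response) - 1] = '\0';
   }

With such a cache in place, the duplicate Request in the simultaneous-login example above is answered with the originally returned Accept rather than being re-evaluated.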

3.6. Invalidation of Transport Parameter Estimates

In order to address invalidation of transport parameter estimates, AAA protocol implementations MAY utilize Congestion Window Validation [RFC2861] and RTO validation when using TCP. This specification also recommends a procedure for RTO validation.

[RFC2581] and [RFC2861] both recommend that a connection go into slow-start after a period where no traffic has been sent within the RTO interval. [RFC2861] recommends only increasing the congestion window if it was full when the ACK arrived. The congestion window is reduced by half once every RTO interval if no traffic is received.

When Congestion Window Validation is used, the congestion window will not build during application-driven periods, and instead will be decayed. As a result, AAA applications operating within the application-driven regime will typically run with a congestion window equal to the initial window much of the time, operating in "perpetual slowstart".

During periods in which AAA behavior is application-driven this will have no effect. Since the time between packets will be larger than RTT, AAA will operate with an effective congestion window equal to the initial window. However, during network-driven periods, the effect will be to space out sending of AAA packets. Thus instead of being able to send a large burst of packets into the network, a client will need to wait several RTTs as the congestion window builds during slow-start.

For example, a client operating over TCP with an initial window of 2, with 35 AAA requests to send would take approximately 6 RTTs to send them, as the congestion window builds during slow start: 2, 3, 3, 6, 9, 12. After the backlog is cleared, the implementation will once again be application-driven and the congestion window size will decay. If the client were using SCTP, the number of RTTs needed to transmit all requests would usually be less, and would depend on the size of the requests, since SCTP tracks the progress for the opening of the congestion window by bytes, not segments.

Note that [RFC2861] and [RFC2988] do not address the issue of RTO validation. This is also a problem, particularly when the Congestion Manager [RFC3124] is implemented. During periods of high packet loss, the RTO may be repeatedly increased via exponential back-off, and may attain a high value. Due to lack of timely feedback on RTT and RTO during application-driven periods, the high RTO estimate may persist long after the conditions that generated it have dissipated.

RTO validation MAY be used to address this issue for TCP, via the following procedure:

After the congestion window is decayed according to [RFC2861], reset the estimated RTO to 3 seconds. After the next packet comes in, re-calculate RTTavg, RTTdev, and RTO according to the method described in [RFC2988].
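
The following C fragment sketches that procedure. The variable and function names are illustrative; the 3 second reset value follows the text above, and the smoothing constants and the 1 second minimum are those of the standard retransmission timer computation [RFC2988].

   #include <math.h>

   struct rtt_state {
       double rtt_avg;   /* smoothed round-trip time estimate        */
       double rtt_dev;   /* round-trip time variation estimate       */
       double rto;       /* current retransmission timeout (seconds) */
       int    valid;     /* do the estimates reflect recent samples? */
   };

   /* Called when the congestion window is decayed per [RFC2861]: the
    * old estimates are treated as stale and the RTO is reset to a
    * conservative 3 seconds. */
   void rto_invalidate(struct rtt_state *s)
   {
       s->rto   = 3.0;
       s->valid = 0;
   }

   /* Called with the round-trip time (in seconds) measured on the next
    * packet exchange; re-derives RTTavg, RTTdev, and RTO using the
    * standard smoothing constants. */
   void rto_sample(struct rtt_state *s, double rtt)
   {
       if (!s->valid) {
           s->rtt_avg = rtt;
           s->rtt_dev = rtt / 2.0;
           s->valid   = 1;
       } else {
           double err = rtt - s->rtt_avg;
           s->rtt_avg += err / 8.0;                      /* alpha = 1/8 */
           s->rtt_dev += (fabs(err) - s->rtt_dev) / 4.0; /* beta  = 1/4 */
       }
       s->rto = s->rtt_avg + 4.0 * s->rtt_dev;
       if (s->rto < 1.0)     /* 1 second minimum, per [RFC2988] */
           s->rto = 1.0;
   }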

To address this issue for SCTP, AAA implementations SHOULD use SCTP heartbeats. [RFC2960] states that heartbeats should be enabled by default, with an interval of 30 seconds. If this interval proves to be too long to resolve this issue, AAA implementations MAY reduce the heartbeat interval.
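
Where the host SCTP implementation exposes the commonly used SCTP socket extensions, the heartbeat interval can be adjusted per association, as in the following illustrative sketch. The option and field names are assumptions about the local sockets API and may differ by platform.

   #include <stdint.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <netinet/sctp.h>

   /* Lower the heartbeat interval (in milliseconds) on an association
    * so that transport parameter estimates are refreshed more often.
    * A zeroed spp_address applies the setting to all peer addresses of
    * the association. */
   int set_hb_interval(int sd, sctp_assoc_t assoc_id,
                       uint32_t interval_ms)
   {
       struct sctp_paddrparams params;

       memset(&params, 0, sizeof(params));
       params.spp_assoc_id   = assoc_id;
       params.spp_hbinterval = interval_ms;
       params.spp_flags      = SPP_HB_ENABLE;

       return setsockopt(sd, IPPROTO_SCTP, SCTP_PEER_ADDR_PARAMS,
                         &params, sizeof(params));
   }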

3.7. Inability to Use Fast Re-Transmit

When Congestion Window Validation [RFC2861] is used, AAA implementations will operate with a congestion window equal to the initial window much of the time. As a result, the window size will often not be large enough to enable use of fast re-transmit for TCP. In addition, since AAA traffic is two-way, ACKs carrying data will not count towards triggering fast re-transmit. SCTP is less likely to encounter this issue, so the measures described below apply to TCP.

To address this issue, AAA implementations SHOULD support selective acknowledgement as described in [RFC2018] and [RFC2883]. AAA implementations SHOULD also implement Limited Transmit for TCP, as described in [RFC3042]. Rather than reducing the number of duplicate ACKs required for triggering fast recovery, which would increase the number of inappropriate re-transmissions, Limited Transmit enables the window size to be increased, thus enabling the sending of additional packets which in turn may trigger fast re-transmit without a change to the algorithm.

However, if congestion window validation [RFC2861] is implemented, this proposal will only have an effect in situations where the time between packets is less than the estimated retransmission timeout (RTO). If the time between packets is greater than RTO, additional packets will typically not be available for sending so as to take advantage of the increased window size. As a result, AAA protocols will typically operate with the lowest possible congestion window size, resulting in a re-transmission timeout for every lost packet.

3.8. Head of Line Blocking

TCP inherently does not provide a solution to the head-of-line blocking problem, although its effects can be lessened by implementation of Limited Transmit [RFC3042], and connection load balancing.

3.8.1. Using SCTP Streams to Prevent Head of Line Blocking

Each AAA node SHOULD distribute its messages evenly across the range of SCTP streams that it and its peer have agreed upon. (A lost message in one stream will not cause any other streams to block.) A trivial and effective implementation of this simply increments a counter for the stream ID to send on. When the counter reaches the maximum number of streams for the association, it resets to 0.
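
A non-normative C sketch of such a counter-based distribution is shown below; the structure and function names are illustrative assumptions.

   #include <stdint.h>

   struct assoc_state {
       uint16_t num_streams;   /* outbound streams agreed with the peer */
       uint16_t next_stream;   /* round-robin counter                   */
   };

   /* Return the stream ID on which to send the next message, cycling
    * through all streams so that a loss on one stream cannot block
    * messages sent on the others. */
   uint16_t next_stream_id(struct assoc_state *a)
   {
       uint16_t sid = a->next_stream;

       a->next_stream++;
       if (a->next_stream >= a->num_streams)
           a->next_stream = 0;          /* wrap back to stream 0 */
       return sid;
   }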

AAA peers MUST be able to accept messages on any stream. Note that streams are used *solely* to prevent head-of-the-line blocking. All identifying information is carried within the Diameter payload. Messages distributed across multiple streams may not be received in the order they are sent.

SCTP peers can allocate up to 65535 streams for an association. The cost for idle streams may or may not be zero, depending on the implementation, and the cost for non-idle streams is always greater than zero. So administrators may wish to limit the number of possible streams on their Diameter nodes according to the resources (e.g. memory, CPU power, etc.) of a particular node.

On a Diameter client, the number of streams may be determined by the maximum number of peak users on the NAS. If a stream is available per user, then this should be sufficient to prevent head-of-line blocking. On a Diameter proxy, the number of streams may be determined by the maximum number of peak sessions in progress from that proxy to each downstream AAA server.

Stream IDs do not need to be preserved by relay agents. This simplifies implementation, as agents can easily handle forwarding between two associations with different numbers of streams. For example, consider the following case, where a relay server DRL forwards messages between a NAS and a home server, HMS. The NAS and DRL have agreed upon 1000 streams for their association, and DRL and HMS have agreed upon 2000 streams for their association. The following figure shows the message flow from NAS to HMS via DRL, and the stream ID assignments for each message:

   +------+                   +------+                   +------+
   |      |                   |      |                   |      |
   | NAS  |    --------->     | DRL  |     --------->    | HMS  |
   |      |                   |      |                   |      |
   +------+   1000 streams    +------+    2000 streams   +------+
        
              msg 1: str id 0             msg 1: str id 0
              msg 2: str id 1             msg 2: str id 1
              ...
              msg 1000: str id 999        msg 1000: str id 999
              msg 1001: str id 0          msg 1001: str id 1000
        

DRL can forward messages 1 through 1000 to HMS using the same stream ID that NAS used to send to DRL. However, since the NAS / DRL association has only 1000 streams, NAS wraps around to stream ID 0 when sending message 1001. The DRL / HMS association, on the other hand, has 2000 streams, so DRL can reassign message 1001 to stream ID 1000 when forwarding it on to HMS.
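
Building on the counter sketch earlier in this section (same illustrative names), the following fragment shows a relay choosing the outbound stream independently of the inbound one, so that associations with different numbers of streams need no special handling.

   /* A relay need not preserve the inbound stream ID: the outbound
    * stream is chosen independently on the next-hop association, so an
    * association with 1000 streams can feed one with 2000 streams (or
    * vice versa) without any mapping table. */
   uint16_t forward_stream_id(struct assoc_state *out_assoc,
                              uint16_t inbound_stream)
   {
       (void)inbound_stream;            /* not carried end to end */
       return next_stream_id(out_assoc);
   }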

This distribution scheme acts like a hash table. It is possible, yet unlikely, that two messages will end up in the same stream, and even less likely that there will be message loss resulting in blocking when this happens. If it does turn out to be a problem, local administrators can increase the number of streams on their nodes to improve performance.

3.9. Congestion Avoidance

In order to improve upon default timer estimates, AAA implementations MAY implement the Congestion Manager (CM) [RFC3124]. CM is an end-system module that:

(i) Enables an ensemble of multiple concurrent streams from a sender destined to the same receiver and sharing the same congestion properties to perform proper congestion avoidance and control, and

(ii) Allows applications to easily adapt to network congestion.

The CM helps integrate congestion management across all applications and transport protocols. The CM maintains congestion parameters (available aggregate and per-stream bandwidth, per-receiver round-trip times, etc.) and exports an API that enables applications to learn about network characteristics, pass information to the CM, share congestion information with each other, and schedule data transmissions.

The CM enables the AAA application to access transport parameters (RTTavg, RTTdev) via callbacks. RTO estimates are currently not available via the callback interface, though they probably should be. Where available, transport parameters SHOULD be used to improve upon default timer values.

3.10. Premature Failover

Premature failover is prevented by the watchdog functionality described above. If the next hop does not return a reply, the AAA client will send a watchdog message to it to verify liveness. If a watchdog reply is received, then the AAA client will know that the next hop server is functioning at the application layer. As a result, it is only necessary to provide terminal error messages, such as the following:

"Busy": agent/Server too busy to handle additional requests, NAS should failover all requests to another agent/server.

"Can't Locate": agent can't locate the AAA server for the indicated realm; NAS should failover that request to another proxy.

"Can't Forward": agent has tried both primary and secondary AAA servers with no response; NAS should failover the request to another agent.

Note that these messages differ in their scope. The "Busy" message tells the NAS that the agent/server is too busy for ANY request. The "Can't Locate" and "Can't Forward" messages indicate that the ultimate destination cannot be reached or isn't responding, implying per-request failover.
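
The difference in scope can be made concrete with a small dispatch routine. The error names, structures, and helper functions below are illustrative assumptions, not values defined by any AAA protocol.

   struct peer;      /* per-peer state (illustrative)        */
   struct request;   /* pending request state (illustrative) */

   void failover_all_requests(struct peer *p);   /* assumed helpers */
   void failover_request(struct request *req);

   /* Terminal errors reported by a next-hop agent/server that is
    * still answering watchdog messages. */
   enum aaa_terminal_error {
       AAA_ERR_BUSY,          /* peer too busy for ANY request        */
       AAA_ERR_CANT_LOCATE,   /* no AAA server found for the realm    */
       AAA_ERR_CANT_FORWARD   /* primary and secondary servers silent */
   };

   void handle_terminal_error(struct peer *p, struct request *req,
                              enum aaa_terminal_error err)
   {
       switch (err) {
       case AAA_ERR_BUSY:
           /* Peer-wide condition: fail over all requests destined
            * for this agent/server to an alternate. */
           failover_all_requests(p);
           break;
       case AAA_ERR_CANT_LOCATE:
       case AAA_ERR_CANT_FORWARD:
           /* Destination-specific condition: only this request is
            * retried through another agent. */
           failover_request(req);
           break;
       }
   }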

4. Security Considerations

Since AAA clients, agents and servers serve as network access gatekeepers, they are tempting targets for attackers. General security considerations concerning TCP congestion control are discussed in [RFC2581]. However, there are some additional considerations that apply to this specification.

By enabling failover between AAA agents, this specification improves the resilience of AAA applications. However, it may also open avenues for denial of service attacks.

The failover algorithm is driven by lack of response to AAA requests and watchdog packets. On a lightly loaded network where AAA responses would not be received prior to expiration of the watchdog timer, an attacker can swamp the network, causing watchdog packets to be dropped. This will cause the AAA client to switch to another AAA agent, where the attack can be repeated. By causing the AAA client to cycle between AAA agents, service can be denied to users desiring network access.

Where TLS [RFC2246] is being used to provide AAA security, there will be a vulnerability to spoofed reset packets, as well as other transport layer denial of service attacks (e.g. SYN flooding). Since SCTP offers improved denial of service resilience compared with TCP, running AAA applications over SCTP mitigates this to some extent.

Where IPsec [RFC2401] is used to provide security, it is important that IPsec policy require IPsec on incoming packets. In order to enable a AAA client to determine what security mechanisms are in use on an agent or server without prior knowledge, it may be tempting to initiate a connection in the clear, and then to have the AAA agent respond with IKE [RFC2409]. While this approach minimizes required client configuration, it increases the vulnerability to denial of service attack, since a connection request can now not only tie up transport resources, but also resources within the IKE implementation.

5. IANA Considerations

This document does not create any new number spaces for IANA administration.

6. References

6.1. Normative References

[RFC793] Postel, J., "Transmission Control Protocol", STD 7, RFC 793, September 1981.

[RFC896] Nagle, J., "Congestion Control in IP/TCP internetworks", RFC 896, January 1984.

[RFC1750] Eastlake, D., Crocker, S. and J. Schiller, "Randomness Recommendations for Security", RFC 1750, December 1994.

[RFC2018] Mathis, M., Mahdavi, J., Floyd, S. and A. Romanow, "TCP Selective Acknowledgment Options", RFC 2018, October 1996.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2486] Aboba, B. and M. Beadles, "The Network Access Identifier", RFC 2486, January 1999.

[RFC2581] Allman, M., Paxson, V. and W. Stevens, "TCP Congestion Control", RFC 2581, April 1999.

[RFC2883] Floyd, S., Mahdavi, J., Mathis, M., Podolsky, M. and A. Romanow, "An Extension to the Selective Acknowledgment (SACK) Option for TCP", RFC 2883, July 2000.

[RFC2960] Stewart, R., Xie, Q., Morneault, K., Sharp, C., Schwarzbauer, H., Taylor, T., Rytina, I., Kalla, M., Zhang, L. and V. Paxson, "Stream Control Transmission Protocol", RFC 2960, October 2000.

[RFC2988] Paxson, V. and M. Allman, "Computing TCP's Retransmission Timer", RFC 2988, November 2000.

[RFC3042] Allman, M., Balakrishnan H. and S. Floyd, "Enhancing TCP's Loss Recovery Using Limited Transmit", RFC 3042, January 2001.

[RFC3074] Volz, B., Gonczi, S., Lemon, T. and R. Stevens, "DHC Load Balancing Algorithm", RFC 3074, February 2001.

[RFC3124] Balakrishnan, H. and S. Seshan, "The Congestion Manager", RFC 3124, June 2001.

6.2. Informative References

[RFC2246] Dierks, T. and C. Allen, "The TLS Protocol Version 1.0", RFC 2246, January 1999.

[RFC2401] Atkinson, R. and S. Kent, "Security Architecture for the Internet Protocol", RFC 2401, November 1998.

[RFC2409] Harkins, D. and D. Carrel, "The Internet Key Exchange (IKE)", RFC 2409, November 1998.

[RFC2607] Aboba, B. and J. Vollbrecht, "Proxy Chaining and Policy Implementation in Roaming", RFC 2607, June 1999.

[RFC2861] Handley, M., Padhye, J. and S. Floyd, "TCP Congestion Window Validation", RFC 2861, June 2000.

[RFC2865] Rigney, C., Willens, S., Rubens, A. and W. Simpson, "Remote Authentication Dial In User Service (RADIUS)", RFC 2865, June 2000.

[RFC2866] Rigney, C., "RADIUS Accounting", RFC 2866, June 2000.

[RFC2914] Floyd, S., "Congestion Control Principles", BCP 41, RFC 2914, September 2000.

[RFC2975] Aboba, B., Arkko, J. and D. Harrington, "Introduction to Accounting Management", RFC 2975, June 2000.

[RFC3390] Allman, M., Floyd, S. and C. Partridge, "Increasing TCP's Initial Window", RFC 3390, October 2002.

   [Congest] Jacobson, V., "Congestion Avoidance and Control", Computer
             Communication Review, vol. 18, no. 4, pp. 314-329, Aug.
             1988.  ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z
        
[Paxson] Paxson, V., "Measurement and Analysis of End-to-End Internet Dynamics", Ph.D. Thesis, Computer Science Division, University of California, Berkeley, April 1997.

Appendix A - Detailed Watchdog Algorithm

In this Appendix, the memory control structure that contains all information regarding a specific peer is referred to as a Peer Control Block, or PCB. The PCB contains the following fields:

   Status:
        OKAY:     The connection is up
        SUSPECT:  Failover has been initiated on the connection.
        DOWN:     Connection has been closed.
        REOPEN:   Attempting to reopen a closed connection
        INITIAL:  The initial state of the pcb when it is first
                  created.  The pcb has never been opened.

   Variables:
        Pending:  Set to TRUE if there is an outstanding unanswered
                  watchdog request
        Tw:       Watchdog timer value
        NumDWA:   Number of DWAs received during REOPEN

Tw is the watchdog timer, measured in seconds. Every second, Tw is decremented. When it reaches 0, the OnTimerElapsed event (see below) is invoked. Pseudo-code for the algorithm is included on the following pages.

   SetWatchdog()
   {
   /*
    SetWatchdog() is called whenever it is necessary
    to reset the watchdog timer Tw.  The value of the
    watchdog timer is calculated based on the default
    initial value TWINIT and a jitter ranging from
    -2 to 2 seconds.  The default for TWINIT is 30 seconds,
    and MUST NOT be set lower than 6 seconds.
   */
       Tw=TWINIT -2.0 + 4.0 * random() ;
       SetTimer(Tw) ;
       return ;
   }
        
   /*
    OnReceive() is called whenever a message
    is received from the peer.  This message MAY
    be a request or an answer, and can include
    DWR and DWA messages.  Pending is assumed to
    be a global variable.
   */
   OnReceive(pcb, msgType)
   {
      if (msgType == DWA) {
           Pending = FALSE;
      }
      switch (pcb->status){
      case OKAY:
           SetWatchdog();
           break;
      case SUSPECT:
           pcb->status = OKAY;
           Failback(pcb);
           SetWatchdog();
           break;
      case REOPEN:
           if (msgType == DWA) {
              NumDWA++;
              if (NumDWA == 3) {
                 pcb->status = OKAY;
                 Failback(pcb);
              }
           } else {
              Throwaway(received packet);
           }
           break;
      case INITIAL:
      case DOWN:
           Throwaway(received packet);
           break;
      default:
           error("Shouldn't be here!");
           break;
      }
   }
        
   /*
   OnTimerElapsed() is called whenever Tw reaches zero (0).
   */
   OnTimerElapsed(pcb)
   {
       switch (pcb->status){
          case OKAY:
             if (!Pending) {
                SendWatchdog(pcb);
                SetWatchdog();
                Pending = TRUE;
                break;
             }
              pcb->status = SUSPECT;
              Failover(pcb);
              SetWatchdog();
              break;
           case SUSPECT:
              pcb->status = DOWN;
              CloseConnection(pcb);
              SetWatchdog();
              break;
           case INITIAL:
           case DOWN:
              AttemptOpen(pcb);
              SetWatchdog();
              break;
           case REOPEN:
              if (!Pending) {
                 SendWatchdog(pcb);
                 SetWatchdog();
                 Pending = TRUE;
                 break;
              }
              if (NumDWA < 0) {
                 pcb->status = DOWN;
                 CloseConnection(pcb);
              } else {
                 NumDWA = -1;
              }
              SetWatchdog();
              break;
           default:
              error("Shouldn't be here!");
              break;
           }
    }
        
   /*
   OnConnectionUp() is called whenever a connection comes up
   */
   OnConnectionUp(pcb)
   {
       switch (pcb->status){
          case INITIAL:
             pcb->status = OKAY;
             SetWatchdog();
             break;
          case DOWN:
             pcb->status = REOPEN;
             NumDWA = 0;
              SendWatchdog(pcb);
              SetWatchdog();
              Pending = TRUE;
              break;
           default:
              error("Shouldn't be here!");
              break;
          }
   }
        
   /*
   OnConnectionDown() is called whenever a connection goes down
   */
   OnConnectionDown(pcb)
   {
        CloseConnection(pcb);
        switch (pcb->status){
           case OKAY:
              Failover(pcb);
              SetWatchdog();
              break;
           case SUSPECT:
           case REOPEN:
              SetWatchdog();
              break;
           default:
              error("Shouldn't be here!");
              break;
           }
        pcb->status = DOWN;
   }
        
   /*  Here is the state machine equivalent to the above code:
        
   STATE         Event                Actions              New State
   =====         ------               -------              ----------
   OKAY          Receive DWA          Pending = FALSE
                                      SetWatchdog()        OKAY
   OKAY          Receive non-DWA      SetWatchdog()        OKAY
   SUSPECT       Receive DWA          Pending = FALSE
                                      Failback()
                                      SetWatchdog()        OKAY
   SUSPECT       Receive non-DWA      Failback()
                                      SetWatchdog()        OKAY
   REOPEN        Receive DWA &        Pending = FALSE
                 NumDWA == 2          NumDWA++
                                      Failback()           OKAY
   REOPEN        Receive DWA &        Pending = FALSE
                 NumDWA < 2           NumDWA++             REOPEN
        
   STATE         Event                Actions              New State
   =====         ------               -------              ----------
   REOPEN        Receive non-DWA      Throwaway()          REOPEN
   INITIAL       Receive DWA          Pending = FALSE
                                      Throwaway()          INITIAL
   INITIAL       Receive non-DWA      Throwaway()          INITIAL
   DOWN          Receive DWA          Pending = FALSE
                                      Throwaway()          DOWN
   DOWN          Receive non-DWA      Throwaway()          DOWN
   OKAY          Timer expires &      SendWatchdog()
                 !Pending             SetWatchdog()
                                      Pending = TRUE       OKAY
   OKAY          Timer expires &      Failover()
                 Pending              SetWatchdog()        SUSPECT
   SUSPECT       Timer expires        CloseConnection()
                                      SetWatchdog()        DOWN
   INITIAL       Timer expires        AttemptOpen()
                                      SetWatchdog()        INITIAL
   DOWN          Timer expires        AttemptOpen()
                                      SetWatchdog()        DOWN
   REOPEN        Timer expires &      SendWatchdog()
                 !Pending             SetWatchdog()
                                      Pending = TRUE       REOPEN
   REOPEN        Timer expires &      CloseConnection()
                 Pending &            SetWatchdog()
                 NumDWA < 0                                DOWN
   REOPEN        Timer expires &      NumDWA = -1
                 Pending &            SetWatchdog()
                 NumDWA >= 0                               REOPEN
   INITIAL       Connection up        SetWatchdog()        OKAY
   DOWN          Connection up        NumDWA = 0
                                      SendWatchdog()
                                      SetWatchdog()
                                      Pending = TRUE       REOPEN
   OKAY          Connection down      CloseConnection()
                                      Failover()
                                      SetWatchdog()        DOWN
   SUSPECT       Connection down      CloseConnection()
                                      SetWatchdog()        DOWN
   REOPEN        Connection down      CloseConnection()
                                      SetWatchdog()        DOWN
   */
        

Appendix B - AAA Agents

As described in [RFC2865] and [RFC2607], AAA agents have become popular in order to support services such as roaming and shared use networks. Such agents are used both for authentication/authorization, as well as accounting [RFC2975].

AAA agents include:

      Relays
      Proxies
      Re-directs
      Store and Forward proxies
      Transport layer proxies

The transport layer behavior of each of these agents is described below.

B.1 Relays and Proxies

While the application-layer behavior of relays and proxies is different, at the transport layer the behavior is similar. In both cases, two connections are established: one from the AAA client (NAS) to the relay/proxy, and another from the relay/proxy to the AAA server. The relay/proxy does not respond to a client request until it receives a response from the server. Since the two connections are de-coupled, the end-to-end conversation between the client and server may not self-clock.

Since AAA transport is typically application-driven, there is frequently not enough traffic to enable ACK piggybacking. As a result, the Nagle algorithm is rarely triggered, and delayed ACKs may comprise nearly half the traffic. Thus AAA protocols running over reliable transport will see packet traffic nearly double that experienced with UDP transport. Since ACK parameters (such as the value of the delayed ACK timer) are typically fixed by the TCP implementation and are not tunable by the application, there is little that can be done about this.

A typical trace of a conversation between a NAS, proxy and server is shown below:

   Time            NAS           Relay/Proxy           Server
   ------          ---           -----------           ------
        
   0               Request
                   ------->
   OTTnp + Tpr                     Request
                                   ------->
        
   OTTnp + TdA                     Delayed ACK
                                   <-------
        
   OTTnp + OTTps +                                 Reply/ACK
   Tpr + Tsr                                       <-------
        
   OTTnp + OTTps +
   Tpr + Tsr +                     Reply
   OTTsp + TpR                     <-------
        
   OTTnp + OTTps +
   Tpr + Tsr +                     Delayed ACK
   OTTsp + TdA                     ------->
        
   OTTnp + OTTps +
   OTTsp + OTTpn +
   Tpr + Tsr +      Delayed ACK
   TpR + TdA        ------->
        
   Key
   ---
   OTT   = One-way Trip Time
   OTTnp = One-way trip time (NAS to Relay/Proxy)
   OTTpn = One-way trip time (Relay/Proxy to NAS)
   OTTps = One-way trip time (Relay/Proxy to Server)
   OTTsp = One-way trip time (Server to Relay/Proxy)
   TdA   = Delayed ACK timer
   Tpr   = Relay/Proxy request processing time
   TpR   = Relay/Proxy reply processing time
   Tsr   = Server request processing time
        

At time 0, the NAS sends a request to the relay/proxy. Ignoring the serialization time, the request arrives at the relay/proxy at time OTTnp, and the relay/proxy takes an additional Tpr in order to forward the request toward the home server. At time TdA after receiving the request, the relay/proxy sends a delayed ACK. The delayed ACK is sent, rather than being piggybacked on the reply, as long as TdA < OTTps + OTTsp + Tpr + Tsr + TpR.

Typically Tpr < TdA, so that the delayed ACK is sent after the relay/proxy forwards the request toward the server, but before the relay/proxy receives the reply from the server. However, depending on the TCP implementation on the relay/proxy and when the request is received, it is also possible for the delayed ACK to be sent prior to forwarding the request.

At time OTTnp + OTTps + Tpr, the server receives the request, and Tsr later, it generates the reply. Where Tsr < TdA, the reply will contain a piggybacked ACK. However, depending on the server responsiveness and TCP implementation, the ACK and reply may be sent separately. This can occur, for example, where a slow database or storage system must be accessed prior to sending the reply.

At time OTTnp + OTTps + OTTsp + Tpr + Tsr the reply/ACK reaches the relay/proxy, which then takes TpR additional time to forward the reply to the NAS. At TdA after receiving the reply, the relay/proxy generates a delayed ACK. Typically TpR < TdA so that the delayed ACK is sent to the server after the relay/proxy forwards the reply to the NAS. However, depending on the circumstances and the relay/proxy TCP implementation, the delayed ACK may be sent first.

As with a delayed ACK sent in response to a request, which may be piggybacked if the reply can be received quickly enough, piggybacking of the ACK sent in response to a reply from the server is only possible if additional request traffic is available. However, due to the high inter-packet spacings in typical AAA scenarios, this is unlikely unless the AAA protocol supports a reply ACK.

At time OTTnp + OTTps + OTTsp + OTTpn + Tpr + Tsr + TpR the NAS receives the reply. TdA later, a delayed ACK is generated.

B.2 Re-directs

Re-directs operate by referring a NAS to the AAA server, enabling the NAS to talk to the AAA server directly. Since a direct transport connection is established, the end-to-end connection will self-clock.

With re-directs, delayed ACKs are less frequent than with application-layer proxies since the Re-direct and Server will typically piggyback replies with ACKs.

The sequence of events is as follows:

   Time            NAS             Re-direct       Server
   ------          ---             ---------       ------
        
   0               Request
                   ------->
   OTTnp + Tpr                     Redirect/ACK
                                   <-------
        
   OTTnp + Tpr +   Request
   OTTpn + Tnr     ------->
        
   OTTnp + OTTpn +
   Tpr + Tsr +                                     Reply/ACK
   OTTns                                           <-------
        
   OTTnp + OTTpn +
   OTTns + OTTsn +
   Tpr + Tsr +      Delayed ACK
   TdA              ------->
        
   Key
   ---
   OTT   = One-way Trip Time
   OTTnp = One-way trip time (NAS to Re-direct)
   OTTpn = One-way trip time (Re-direct to NAS)
   OTTns = One-way trip time (NAS to Server)
   OTTsn = One-way trip time (Server to NAS)
   TdA   = Delayed ACK timer
   Tpr   = Re-direct processing time
   Tnr   = NAS re-direct processing time
   Tsr   = Server request processing time
        
B.3 Store and Forward Proxies

With a store and forward proxy, the proxy may send a reply to the NAS prior to forwarding the request to the server. While store and forward proxies are most frequently deployed for accounting [RFC2975], they also can be used to implement authentication/authorization policy, as described in [RFC2607].

As noted in [RFC2975], store and forward proxies can have a negative effect on accounting reliability. By sending a reply to the NAS without receiving one from the accounting server, store and forward proxies fool the NAS into thinking that the accounting request had been accepted by the accounting server when this is not the case. As a result, the NAS can delete the accounting packet from non-volatile storage before it has been accepted by the accounting server. That leaves the proxy responsible for delivering accounting packets. If the proxy involves moving parts (e.g. a disk drive) while the NAS does not, overall system reliability can be reduced. As a result, store and forward proxies SHOULD NOT be used.

The sequence of events is as follows:

   Time            NAS             Proxy           Server
   ------          ---             -----           ------
        
   0               Request
                   ------->
   OTTnp + TpR                     Reply/ACK
                                   <-------
        
   OTTnp + Tpr                     Request
                                   ------->
        
   OTTnp + OTTph +                                 Reply/ACK
   Tpr + Tsr                                       <-------
        
   OTTnp + OTTph +
   Tpr + Tsr +                     Reply
   OTThp + TpR                     <-------
        
   OTTnp + OTTph +
   Tpr + Tsr +                     Delayed ACK
   OTThp + TdA                     ------->
        
   OTTnp + OTTph +
   OTThp + OTTpn +
   Tpr + Tsr +      Delayed ACK
   TpR + TdA        ------->
        
   Key
   ---
   OTT   = One-way Trip Time
   OTTnp = One-way trip time (NAS to Proxy)
   OTTpn = One-way trip time (Proxy to NAS)
   OTTph = One-way trip time (Proxy to Home server)
   OTThp = One-way trip time (Home Server to Proxy)
   TdA   = Delayed ACK timer
   Tpr   = Proxy request processing time
   TpR   = Proxy reply processing time
   Tsr   = Server request processing time
        
B.4 Transport Layer Proxies

In addition to acting as proxies at the application layer, transport layer proxies forward transport ACKs between the AAA client and server. This splices together the client-proxy and proxy-server connections into a single connection that behaves as though it operates end-to-end, exhibiting self-clocking. However, since transport proxies operate at the transport layer, they cannot be implemented purely as applications and they are rarely deployed.

With a transport proxy, the sequence of events is as follows:

   Time            NAS             Proxy           Home Server
   ------          ---             -----           -----------
        
   0               Request
                   ------->
   OTTnp + Tpr                     Request
                                   ------->
        
   OTTnp + OTTph +                                 Reply/ACK
   Tpr + Tsr                                       <-------
        
   OTTnp + OTTph +
   Tpr + Tsr +                     Reply/ACK
   OTThp + TpR                     <-------
        
   OTTnp + OTTph +
   OTThp + OTTpn +
   Tpr + Tsr +      Delayed ACK
   TpR + TdA        ------->
        
   OTTnp + OTTph +
   OTThp + OTTpn +
   Tpr + Tsr +                     Delayed ACK
   TpR + TpD                       ------->
        
   Key
   ---
   OTT   = One-way Trip Time
   OTTnp = One-way trip time (NAS to Proxy)
   OTTpn = One-way trip time (Proxy to NAS)
   OTTph = One-way trip time (Proxy to Home server)
   OTThp = One-way trip time (Home Server to Proxy)
   TdA   = Delayed ACK timer
   Tpr   = Proxy request processing time
   TpR   = Proxy reply processing time
   Tsr   = Server request processing time
   TpD   = Proxy delayed ACK processing time

Tsr=服务器请求处理时间TpD=代理延迟ack处理时间
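
The event times in the figure above can also be composed numerically. The following Python sketch is illustrative only and is not part of this specification; the per-hop delay values are arbitrary assumptions, and only the sums mirror the figure:

   # Illustrative sketch: recompute the event times in the transport
   # proxy figure from example per-hop delays.  The millisecond values
   # below are hypothetical assumptions; only the sums follow the figure.

   OTTnp = 20.0   # One-way trip time, NAS to Proxy (ms)
   OTTpn = 20.0   # One-way trip time, Proxy to NAS (ms)
   OTTph = 50.0   # One-way trip time, Proxy to Home server (ms)
   OTThp = 50.0   # One-way trip time, Home server to Proxy (ms)
   Tpr   = 2.0    # Proxy request processing time (ms)
   TpR   = 2.0    # Proxy reply processing time (ms)
   Tsr   = 5.0    # Server request processing time (ms)
   TdA   = 200.0  # Delayed ACK timer (ms)
   TpD   = 1.0    # Proxy delayed ack processing time (ms)

   events = [
       ("Request sent by NAS",            0.0),
       ("Request forwarded by Proxy",     OTTnp + Tpr),
       ("Reply/ACK sent by Home server",  OTTnp + OTTph + Tpr + Tsr),
       ("Reply/ACK forwarded by Proxy",   OTTnp + OTTph + Tpr + Tsr
                                          + OTThp + TpR),
       ("Delayed ACK sent by NAS",        OTTnp + OTTph + OTThp + OTTpn
                                          + Tpr + Tsr + TpR + TdA),
       ("Delayed ACK forwarded by Proxy", OTTnp + OTTph + OTThp + OTTpn
                                          + Tpr + Tsr + TpR + TpD),
   ]

   for name, t in events:
       print(f"{t:8.1f} ms  {name}")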

Intellectual Property Statement

The IETF takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on the IETF's procedures with respect to rights in standards-track and standards-related documentation can be found in BCP-11. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementors or users of this specification can be obtained from the IETF Secretariat.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to practice this standard. Please address the information to the IETF Executive Director.

Acknowledgments

Thanks to Allison Mankin of AT&T, Barney Wolff of Databus, Steve Rich of Cisco, Randy Bush of AT&T, Bo Landarv of IP Unplugged, Jari Arkko of Ericsson, and Pat Calhoun of Blackstorm Networks for fruitful discussions relating to AAA transport.

Authors' Addresses

   Bernard Aboba
   Microsoft Corporation
   One Microsoft Way
   Redmond, WA 98052

   Phone: +1 425 706 6605
   Fax:   +1 425 936 7329
   EMail: bernarda@microsoft.com
        

   Jonathan Wood
   Sun Microsystems, Inc.
   901 San Antonio Road
   Palo Alto, CA 94303

   EMail: jonwood@speakeasy.net
        

Full Copyright Statement

Copyright (C) The Internet Society (2003). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgement

Funding for the RFC Editor function is currently provided by the Internet Society.
