Network Working Group                                       R. Finlayson
Request for Comments: 3119                                      LIVE.COM
Category: Standards Track                                      June 2001
A More Loss-Tolerant RTP Payload Format for MP3 Audio
Status of this Memo
This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2001). All Rights Reserved.
Abstract
This document describes a RTP (Real-Time Protocol) payload format for transporting MPEG (Moving Picture Experts Group) 1 or 2, layer III audio (commonly known as "MP3"). This format is an alternative to that described in RFC 2250, and performs better if there is packet loss.
While the RTP payload format defined in RFC 2250 [2] is generally applicable to all forms of MPEG audio or video, it is sub-optimal for MPEG 1 or 2, layer III audio (commonly known as "MP3"). The reason for this is that an MP3 frame is not a true "Application Data Unit" - it contains a back-pointer to data in earlier frames, and so cannot be decoded independently of these earlier frames. Because RFC 2250 defines that packet boundaries coincide with frame boundaries, it handles packet loss inefficiently when carrying MP3 data. The loss of an MP3 frame will render some data in previous (or future) frames useless, even if they are received without loss.
In this document we define an alternative RTP payload format for MP3 audio. This format uses a data-preserving rearrangement of the original MPEG frames, so that packet boundaries now coincide with true MP3 "Application Data Units", which can also (optionally) be rearranged in an interleaving pattern. This new format is therefore more data-efficient than RFC 2250 in the face of packet loss.
In this section we give a brief overview of the structure of an MP3 frame. (For a more detailed description, see the MPEG 1 audio [3] and MPEG 2 audio [4] specifications.)
Each MPEG audio frame begins with a 4-byte header. Information defined by this header includes:
- Whether the audio is MPEG 1 or MPEG 2.
- Whether the audio is layer I, II, or III.  (The remainder of this
  document assumes layer III, i.e., "MP3" frames.)
- Whether the audio is mono or stereo.
- Whether or not there is a 2-byte CRC field following the header.
- (indirectly) The size of the frame.
The following structures appear after the header:
- (optionally) A 2-byte CRC field
- A "side info" structure.  This has the following length:
     - 32 bytes for MPEG 1 stereo
     - 17 bytes for MPEG 1 mono, or for MPEG 2 stereo
     - 9 bytes for MPEG 2 mono
- Encoded audio data, plus optional ancillary data (filling out the
  rest of the frame)
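As a non-normative illustration, the "side info" lengths listed above can be computed as follows. This is a minimal C sketch; the isMPEG1 and isMono flags are assumed to have been parsed from the 4-byte header, and the result does not include the optional 2-byte CRC field.

   /* Sketch only: "side info" length in bytes, per the sizes listed
    * above (the optional 2-byte CRC field is not included). */
   static int sideInfoSize(int isMPEG1, int isMono) {
       if (isMPEG1) return isMono ? 17 : 32;  /* MPEG 1: mono 17, stereo 32 */
       else         return isMono ?  9 : 17;  /* MPEG 2: mono 9, stereo 17 */
   }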
For the purpose of this document, the "side info" structure is the most important, because it defines the location and size of the "Application Data Unit" (ADU) that an MP3 decoder will process. In particular, the "side info" structure defines:
- "main_data_begin": This is a back-pointer (in bytes) to the start of the ADU. The back-pointer is counted from the beginning of the frame, and counts only encoded audio data and any ancillary data (i.e., ignoring any header, CRC, or "side info" fields).
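As a non-normative illustration, the back-pointer can be read as follows. The bit layout (the first 9 bits of the "side info" for MPEG 1, or the first 8 bits for MPEG 2) is taken from the MPEG audio specifications [3][4], not from this document, and "sideInfo" is assumed to point just past the header and any CRC field.

   #include <stdint.h>

   /* Sketch only: extract "main_data_begin" from the start of the
    * "side info" structure (9 bits for MPEG 1, 8 bits for MPEG 2,
    * per the MPEG audio specs). */
   static unsigned mainDataBegin(const uint8_t* sideInfo, int isMPEG1) {
       if (isMPEG1)
           return ((unsigned)sideInfo[0] << 1) | (sideInfo[1] >> 7);
       return sideInfo[0];
   }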
An MP3 decoder processes each ADU independently. The ADUs will generally vary in length, but their average length will, of course, be that of the MP3 frames (minus the length of the header, CRC, and "side info" fields). (In MPEG literature, this ADU is sometimes referred to as a "bit reservoir".)
As noted in [5], a payload format should be designed so that packet boundaries coincide with "codec frame boundaries" - i.e., with ADUs. In the RFC 2250 payload format for MPEG audio [2], each RTP packet payload contains MP3 frames. In this new payload format for MP3 audio, however, each RTP packet payload contains "ADU frames", each preceded by an "ADU descriptor".
An "ADU frame" is defined as:
- The 4-byte MPEG header (the same as the original MP3 frame, except
  that the first 11 bits are (optionally) replaced by an "Interleaving
  Sequence Number", as described in section 6 below)
- The optional 2-byte CRC field (the same as the original MP3 frame)
- The "side info" structure (the same as the original MP3 frame)
- The complete sequence of encoded audio data (and any ancillary data)
  for the ADU (i.e., running from the start of this MP3 frame's
  "main_data_begin" back-pointer, up to the start of the next MP3
  frame's back-pointer)
Within each RTP packet payload, each "ADU frame" is preceded by a 1 or 2-byte "ADU descriptor", which gives the size of the ADU, and indicates whether or not this packet's data is a continuation of the previous packet's data. (This occurs only when a single "ADU descriptor"+"ADU frame" is too large to fit within a RTP packet.)
An ADU descriptor consists of the following fields:
- "C": Continuation flag (1 bit): 1 if the data following the ADU descriptor is a continuation of an ADU frame that was too large to fit within a single RTP packet; 0 otherwise. - "T": Descriptor Type flag (1 bit): 0 if this is a 1-byte ADU descriptor; 1 if this is a 2-byte ADU descriptor. - "ADU size" (6 or 14 bits): The size (in bytes) of the ADU frame that will follow this ADU descriptor (i.e., NOT including the size of the descriptor itself). A 2-byte ADU descriptor (with a 14-bit "ADU size" field) is used for ADU frames sizes of 64 bytes or more. For smaller ADU frame sizes, senders MAY alternatively
- “C”:延续标志(1位):如果ADU描述符后面的数据是ADU帧的延续,该ADU帧太大,无法容纳在单个RTP数据包中,则为1;否则为0。-“T”:描述符类型标志(1位):如果是1字节ADU描述符,则为0;1如果这是一个2字节ADU描述符。-“ADU大小”(6或14位):将跟随此ADU描述符的ADU帧的大小(字节)(即,不包括描述符本身的大小)。2字节ADU描述符(带有14位“ADU大小”字段)用于64字节或更大的ADU帧大小。对于较小的ADU帧尺寸,发送器也可以
use a 1-byte ADU descriptor (with a 6-bit "ADU size" field). Receivers MUST be able to accept an ADU descriptor of either size.
使用1字节ADU描述符(带有6位“ADU大小”字段)。接收器必须能够接受任意大小的ADU描述符。
Thus, a 1-byte ADU descriptor is formatted as follows:
    0 1 2 3 4 5 6 7
   +-+-+-+-+-+-+-+-+
   |C|0| ADU size  |
   +-+-+-+-+-+-+-+-+
and a 2-byte ADU descriptor is formatted as follows:
    0                   1
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |C|1|    ADU size (14 bits)     |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
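As a non-normative illustration, the following C sketch writes and parses an ADU descriptor laid out as shown above. The function names, and the choice to use the compact 1-byte form whenever the ADU frame size is below 64 bytes, are assumptions of this sketch, not requirements of the format.

   #include <stddef.h>
   #include <stdint.h>

   /* Sketch only: write an ADU descriptor into "buf"; returns its
    * length (1 or 2 bytes).  "aduFrameSize" is the "ADU size" field;
    * "isContinuation" is the "C" flag. */
   size_t writeADUDescriptor(uint8_t* buf, unsigned aduFrameSize,
                             int isContinuation) {
       uint8_t c = isContinuation ? 0x80 : 0x00;
       if (aduFrameSize < 64) {          /* 1-byte form: C|0|6-bit size */
           buf[0] = c | (uint8_t)aduFrameSize;
           return 1;
       } else {                          /* 2-byte form: C|1|14-bit size */
           buf[0] = c | 0x40 | (uint8_t)((aduFrameSize >> 8) & 0x3F);
           buf[1] = (uint8_t)(aduFrameSize & 0xFF);
           return 2;
       }
   }

   /* Sketch only: parse an ADU descriptor; returns the number of
    * descriptor bytes consumed (1 or 2). */
   size_t parseADUDescriptor(const uint8_t* buf, unsigned* aduFrameSize,
                             int* isContinuation) {
       *isContinuation = (buf[0] & 0x80) != 0;    /* "C" flag */
       if (buf[0] & 0x40) {                       /* "T" == 1: 2-byte form */
           *aduFrameSize = ((unsigned)(buf[0] & 0x3F) << 8) | buf[1];
           return 2;
       } else {                                   /* "T" == 0: 1-byte form */
           *aduFrameSize = buf[0] & 0x3F;
           return 1;
       }
   }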
Each RTP packet payload begins with a "ADU descriptor", followed by "ADU frame" data. Normally, this "ADU descriptor"+"ADU frame" will fit completely within the RTP packet. In this case, more than one successive "ADU descriptor"+"ADU frame" MAY be packed into a single RTP packet, provided that they all fit completely.
If, however, a single "ADU descriptor"+"ADU frame" is too large to fit within an RTP packet, then the "ADU frame" is split across two or more successive RTP packets. Each such packet begins with an ADU descriptor. The first packet's descriptor has a "C" (continuation) flag of 0; the following packets' descriptors each have a "C" flag of 1. Each descriptor, in this case, has the same "ADU size" value: the size of the entire "ADU frame" (not just the portion that will fit within a single RTP packet). Each such packet (even the last one) contains only one "ADU descriptor".
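The packing and splitting rules above can be sketched as follows (non-normative). The payload size limit, the ADUFrame representation, sendRTPPacket(), and the reuse of the writeADUDescriptor() helper sketched earlier are all assumptions of this illustration; a real sender would also flush the final partially filled packet.

   #include <stdint.h>
   #include <string.h>

   #define MAX_PAYLOAD 1400     /* assumed RTP payload budget, in bytes */

   typedef struct {             /* assumed representation of one ADU frame */
       const uint8_t* data;
       unsigned size;           /* "ADU size" (frame bytes, no descriptor) */
   } ADUFrame;

   /* Assumed to exist elsewhere: */
   extern size_t writeADUDescriptor(uint8_t* buf, unsigned aduFrameSize,
                                    int isContinuation);
   extern void sendRTPPacket(const uint8_t* payload, size_t len);

   void packADUFrame(const ADUFrame* f) {
       static uint8_t pkt[MAX_PAYLOAD];
       static size_t used = 0;
       size_t descSize = (f->size < 64) ? 1 : 2;

       if (used + descSize + f->size <= MAX_PAYLOAD) {
           /* Whole descriptor+frame fits; several may share one packet. */
           used += writeADUDescriptor(pkt + used, f->size, 0);
           memcpy(pkt + used, f->data, f->size);
           used += f->size;
           return;
       }

       /* Flush whatever is already queued. */
       if (used > 0) { sendRTPPacket(pkt, used); used = 0; }

       if (descSize + f->size <= MAX_PAYLOAD) {
           /* Fits on its own in a fresh packet. */
           used += writeADUDescriptor(pkt, f->size, 0);
           memcpy(pkt + used, f->data, f->size);
           used += f->size;
           return;
       }

       /* Too large for any single packet: split it.  Every fragment gets
        * a descriptor carrying the FULL ADU size; "C" is 1 on all
        * fragments after the first. */
       unsigned offset = 0;
       int continuation = 0;
       while (offset < f->size) {
           size_t n = writeADUDescriptor(pkt, f->size, continuation);
           size_t chunk = MAX_PAYLOAD - n;
           if (chunk > f->size - offset) chunk = f->size - offset;
           memcpy(pkt + n, f->data + offset, chunk);
           sendRTPPacket(pkt, n + chunk);
           offset += (unsigned)chunk;
           continuation = 1;
       }
   }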
Payload Type: The (static) payload type 14 that was defined for MPEG audio [6] MUST NOT be used. Instead, a different, dynamic payload type MUST be used - i.e., one in the range [96,127].
M bit: This payload format defines no use for this bit. Senders SHOULD set this bit to zero in each outgoing packet.
Timestamp: This is a 32-bit 90 kHz timestamp, representing the presentation time of the first ADU packed within the packet.
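As a non-normative aside, a sender might maintain this 90 kHz timestamp as sketched below. The samples-per-frame constants (1152 for MPEG 1 layer III, 576 for MPEG 2 layer III) come from the MPEG audio specifications [3][4], not from this document; fractional ticks are accumulated so that non-integer increments (e.g., at 44100 Hz) do not drift.

   #include <stdint.h>

   typedef struct {
       uint32_t ts;        /* current 90 kHz RTP timestamp */
       uint32_t fracNum;   /* remainder carried between frames */
   } TimestampState;

   /* Sketch only: advance the timestamp by one frame/ADU duration. */
   void advanceTimestamp(TimestampState* s, unsigned samplesPerFrame,
                         unsigned samplingRate) {
       uint64_t num = (uint64_t)samplesPerFrame * 90000 + s->fracNum;
       s->ts += (uint32_t)(num / samplingRate);
       s->fracNum = (uint32_t)(num % samplingRate);
   }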
Note that no information is lost by converting a sequence of MP3 frames to a corresponding sequence of "ADU frames", so a receiving RTP implementation can either feed the ADU frames directly to an appropriately modified MP3 decoder, or convert them back into a sequence of MP3 frames, as described in appendix A.2 below.
The RTP payload format described here is intended only for MPEG 1 or 2, layer III audio ("MP3"). In contrast, layer I and layer II frames are self-contained, without a back-pointer to earlier frames. However, it is possible (although unusual) for a sequence of audio frames to consist of a mixture of layer III frames and layer I or II frames. When such a sequence is transmitted, only layer III frames are converted to ADUs; layer I or II frames are sent 'as is' (except for the prepending of an "ADU descriptor"). Similarly, the receiver of a sequence of frames - using this payload format - leaves layer I and II frames untouched (after removing the prepended "ADU descriptor"), but converts layer III frames from "ADU frames" to regular MP3 frames. (Recall that each frame's layer is identified from its 4-byte MPEG header.)
If you are transmitting a stream that consists *only* of layer I or layer II frames (i.e., without any MP3 data), then there is no benefit to using this payload format, *unless* you are using the interleaving mechanism.
The transmission of a sequence of MP3 frames takes the following steps:
MP3 frames -1-> ADU frames -2-> interleaved ADU frames -3-> RTP packets
Step 1, the conversion of a sequence of MP3 frames to a corresponding sequence of ADU frames, takes place as described in sections 2 and 3.1 above. (Note also the pseudo-code in appendix A.1.)
Step 2 is the reordering of the sequence of ADU frames in an (optional) interleaving pattern, prior to packetization, as described in section 6 below. (Note also the pseudo-code in appendix B.1.) Interleaving helps reduce the effect of packet loss, by distributing consecutive ADU frames over non-consecutive packets. (Note that
because of the back-pointer in MP3 frames, interleaving can be applied - in general - only to ADU frames. Thus, interleaving was not possible for RFC 2250.)
Step 3 is the packetizing of a sequence of (interleaved) ADU frames into RTP packets - as described in section 3.3 above. Each packet's RTP timestamp is the presentation time of the first ADU that is packed within it. Note that, if interleaving was done in step 2, the RTP timestamps on outgoing packets will not necessarily be monotonically nondecreasing.
Similarly, a sequence of received RTP packets is handled as follows:
RTP packets -4-> RTP packets ordered by RTP sequence number -5-> interleaved ADU frames -6-> ADU frames -7-> MP3 frames
Step 4 is the usual sorting of incoming RTP packets using the RTP sequence number.
Step 5 is the depacketizing of ADU frames from RTP packets - i.e., the reverse of step 3. As part of this process, a receiver uses the "C" (continuation) flag in the ADU descriptor to notice when an ADU frame is split over more than one packet (and to discard the ADU frame entirely if one of these packets is lost).
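A non-normative sketch of this depacketizing step follows. It assumes the parseADUDescriptor() helper sketched in the descriptor discussion above, a deliverADUFrame() callback, and that the caller reports (via "seqGap") whenever one or more packets were lost immediately before the current one.

   #include <stdint.h>
   #include <string.h>

   #define MAX_ADU_SIZE 16383        /* 14-bit "ADU size" maximum */

   extern size_t parseADUDescriptor(const uint8_t* buf,
                                    unsigned* aduFrameSize,
                                    int* isContinuation);
   extern void deliverADUFrame(const uint8_t* frame, unsigned size);

   static uint8_t  reasmBuf[MAX_ADU_SIZE];  /* partial (split) ADU frame */
   static unsigned reasmExpected = 0;       /* its full "ADU size" */
   static unsigned reasmGot = 0;

   /* Sketch only: call once per RTP payload, in sequence-number order. */
   void handlePayload(const uint8_t* p, size_t len, int seqGap) {
       if (seqGap) reasmExpected = reasmGot = 0;  /* discard a partial frame */

       while (len > 0) {
           unsigned size; int cont;
           size_t n = parseADUDescriptor(p, &size, &cont);
           p += n; len -= n;

           if (!cont && size <= len) {
               /* A complete ADU frame; others may follow in this packet. */
               deliverADUFrame(p, size);
               p += size; len -= size;
           } else {
               /* A fragment of a split ADU frame: it fills the rest of
                * the packet, and its descriptor carries the full size. */
               if (!cont) { reasmExpected = size; reasmGot = 0; }
               if (reasmExpected == size && reasmGot + len <= size) {
                   memcpy(reasmBuf + reasmGot, p, len);
                   reasmGot += (unsigned)len;
                   if (reasmGot == reasmExpected) {
                       deliverADUFrame(reasmBuf, reasmGot);
                       reasmExpected = reasmGot = 0;
                   }
               } else {
                   reasmExpected = reasmGot = 0;  /* inconsistent; discard */
               }
               break;
           }
       }
   }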
Step 6 is the rearranging of the sequence of ADU frames back to its original order (except for ADU frames missing due to packet loss), as described in section 6 below. (Note also the pseudo-code in appendix B.2.)
Step 7 is the conversion of the sequence of ADU frames into a corresponding sequence of MP3 frames - i.e., the reverse of step 1. (Note also the pseudo-code in appendix A.2.) With an appropriately modified MP3 decoder, an implementation may omit this step; instead, it could feed ADU frames directly to the (modified) MP3 decoder.
In MPEG audio frames (MPEG 1 or 2; all layers) the high-order 11 bits of the 4-byte MPEG header ('syncword') are always all-one (i.e., 0xFFE). When reordering a sequence of ADU frames for transmission, we reuse these 11 bits as an "Interleaving Sequence Number" (ISN). (Upon reception, they are replaced with 0xFFE once again.)
The structure of the ISN is (a,b), where:
- a == bits 0-7: 8-bit Interleave Index (within Cycle)
- b == bits 8-10: 3-bit Interleave Cycle Count
I.e., the 4-byte MPEG header is reused as follows:
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |Interleave Idx |CycCt|   The rest of the original MPEG header  |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
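For illustration only, the following C sketch writes an ISN into the high-order 11 bits of the header (sender side) and extracts it while restoring the 0xFFE syncword (receiver side); the remaining 21 header bits are left untouched, as required. The function names are assumptions of this sketch.

   #include <stdint.h>

   /* Sketch only: ii is the 8-bit Interleave Index, icc the 3-bit
    * Interleave Cycle Count. */
   void setISN(uint8_t header[4], unsigned ii, unsigned icc) {
       header[0] = (uint8_t)ii;                                /* bits 0-7  */
       header[1] = (uint8_t)(((icc & 0x7) << 5) | (header[1] & 0x1F));
                                                               /* bits 8-10 */
   }

   void getISNAndRestoreSyncword(uint8_t header[4],
                                 unsigned* ii, unsigned* icc) {
       *ii  = header[0];
       *icc = header[1] >> 5;
       header[0] = 0xFF;                                 /* restore the     */
       header[1] = (uint8_t)(0xE0 | (header[1] & 0x1F)); /* 0xFFE syncword  */
   }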
Example: Consider the following interleave cycle (of size 8): 1,3,5,7,0,2,4,6 (This particular pattern has the property that any loss of up to four consecutive ADUs in the interleaved stream will lead to a deinterleaved stream with no gaps greater than one [7].) This produces the following sequence of ISNs:
(1,0) (3,0) (5,0) (7,0) (0,0) (2,0) (4,0) (6,0) (1,1) (3,1) (5,1) etc.
So, in this example, a sequence of ADU frames
f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 (etc.)
would get reordered, in step 2, into:
(1,0)f1 (3,0)f3 (5,0)f5 (7,0)f7 (0,0)f0 (2,0)f2 (4,0)f4 (6,0)f6 (1,1)f9 (3,1)f11 (5,1)f13 (etc.)
and the reverse reordering (along with replacement of the 0xFFE) would occur upon reception.
The reason for breaking the ISN into "Interleave Cycle Count" and "Interleave Index" (rather than just treating it as a single 11-bit counter) is to give receivers a way of knowing when an ADU frame should be 'released' to the ADU->MP3 conversion process (step 7 above), rather than waiting for more interleaved ADU frames to arrive. E.g., in the example above, when the receiver sees a frame with ISN (<something>,1), it knows that it can release all previously-seen frames with ISN (<something>,0), even if some other (<something>,0) frames remain missing due to packet loss. An 8-bit Interleave Index allows interleave cycles of size up to 256.
The choice of an interleaving order can be made independently of RTP packetization. Thus, a simple implementation could choose an interleaving order first, reorder the ADU frames accordingly (step 2), then simply pack them sequentially into RTP packets (step 3). However, the size of ADU frames - and thus the number of ADU frames that will fit in each RTP packet - will typically vary, so a more optimal implementation would combine steps 2 and 3, by choosing an interleaving order that better reflects the number of ADU frames packed within each RTP packet.
Each receiving implementation of this payload format MUST recognize the ISN and be able to perform deinterleaving of incoming ADU frames (step 6). However, a sending implementation of this payload format MAY choose not to perform interleaving - i.e., by omitting step 2. In this case, the high-order 11 bits in each 4-byte MPEG header would remain at 0xFFE. Receiving implementations would thus see a sequence of identical ISNs (all 0xFFE). They would handle this in the same way as if the Interleave Cycle Count changed with each ADU frame, by simply releasing the sequence of incoming ADU frames sequentially to the ADU->MP3 conversion process (step 7), without reordering. (Note also the pseudo-code in appendix B.2.)
MIME media type name: audio
MIME subtype: mpa-robust
Required parameters: none
Optional parameters: none
Encoding considerations: This type is defined only for transfer via RTP as specified in "RFC 3119".
Security considerations: See the "Security Considerations" section of "RFC 3119".
Interoperability considerations: This encoding is incompatible with both the "audio/mpa" and "audio/mpeg" mime types.
Published specification: The ISO/IEC MPEG-1 [3] and MPEG-2 [4] audio specifications, and "RFC 3119".
Applications which use this media type: Audio streaming tools (transmitting and receiving)
Additional information: none
Person & email address to contact for further information: Ross Finlayson finlayson@live.com
Intended usage: COMMON
Author/Change controller: Author: Ross Finlayson Change controller: IETF AVT Working Group
When conveying information by SDP [8], the encoding name SHALL be "mpa-robust" (the same as the MIME subtype). An example of the media representation in SDP is:
   m=audio 49000 RTP/AVP 121
   a=rtpmap:121 mpa-robust/90000
If a session using this payload format is being encrypted, and interleaving is being used, then the sender SHOULD ensure that any change of encryption key coincides with a start of a new interleave cycle. Apart from this, the security considerations for this payload format are identical to those noted for RFC 2250 [2].
The suggestion of adding an interleaving option (using the first bits of the MPEG 'syncword' - which would otherwise be all-ones - as an interleaving index) is due to Dave Singer and Stefan Gewinner. In addition, Dave Singer provided valuable feedback that helped clarify and improve the description of this payload format. Feedback from Chris Sloan led to the addition of an "ADU descriptor" preceding each ADU frame in the RTP packet.
[1] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[2] Hoffman, D., Fernando, G., Goyal, V. and M. Civanlar, "RTP Payload Format for MPEG1/MPEG2 Video", RFC 2250, January 1998.
[3] ISO/IEC International Standard 11172-3; "Coding of moving pictures and associated audio for digital storage media up to about 1,5 Mbits/s - Part 3: Audio", 1993.
[4] ISO/IEC International Standard 13818-3; "Generic coding of moving pictures and associated audio information - Part 3: Audio", 1998.
[5] Handley, M., "Guidelines for Writers of RTP Payload Format Specifications", BCP 36, RFC 2736, December 1999.
[6] Schulzrinne, H., "RTP Profile for Audio and Video Conferences with Minimal Control", RFC 1890, January 1996.
[7] Marshall Eubanks, personal communication, December 2000.
[8] Handley, M. and V. Jacobson, "SDP: Session Description Protocol", RFC 2327, April 1998.
Ross Finlayson, Live Networks, Inc. (LIVE.COM)
EMail: finlayson@live.com WWW: http://www.live.com/
Appendix A. Translating Between "MP3 Frames" and "ADU Frames"
The following 'pseudo code' describes how a sender using this payload format can translate a sequence of regular "MP3 Frames" to "ADU Frames", and how a receiver can perform the reverse translation: from "ADU Frames" to "MP3 Frames".
We first define the following abstract data structures:
- "Segment": A record that represents either a "MP3 Frame" or an "ADU Frame". It consists of the following fields: - "header": the 4-byte MPEG header - "headerSize": a constant (== 4) - "sideInfo": the 'side info' structure, *including* the optional 2-byte CRC field, if present - "sideInfoSize": the size (in bytes) of the above structure - "frameData": the remaining data in this frame - "frameDataSize": the size (in bytes) of the above data - "backpointer": the size (in bytes) of the backpointer for this frame - "aduDataSize": the size (in bytes) of the ADU associated with this frame. (If the frame is already an "ADU Frame", then aduDataSize == frameDataSize) - "mp3FrameSize": the total size (in bytes) that this frame would have if it were a regular "MP3 Frame". (If it is already a "MP3 Frame", then mp3FrameSize == headerSize + sideInfoSize + frameDataSize) Note that this size can be derived completely from "header".
- “段”:表示“MP3帧”或“ADU帧”的记录。它由以下字段组成:-“header”:4字节MPEG标头-“headerSize”:一个常数(=4)-“sideInfo”:侧信息结构,*包括*可选的2字节CRC字段,如果存在-“sideInfoSize”:上述结构的大小(以字节为单位)“frameData”:此帧中的剩余数据-“frameDataSize”:大小(以字节为单位)在上述数据中—“backpointer”:此帧的backpointer的大小(以字节为单位):“aduDataSize”:与此帧关联的ADU的大小(以字节为单位)。(如果帧已经是“ADU帧”,则ADUDASize==frameDataSize)-“mp3FrameSize”:如果该帧是常规“MP3帧”,则该帧将具有的总大小(以字节为单位)。(如果它已经是一个“MP3帧”,那么mp3FrameSize==headerSize+sideInfoSize+frameDataSize)请注意,此大小可以完全从“header”派生。
- "SegmentQueue": A FIFO queue of "Segment"s, with operations - void enqueue(Segment) - Segment dequeue() - Boolean isEmpty() - Segment head() - Segment tail() - Segment previous(Segment): returns the segment prior to a given one - Segment next(Segment): returns the segment after a given one - unsigned totalDataSize(): returns the sum of the "frameDataSize" fields of each entry in the queue
- "SegmentQueue": A FIFO queue of "Segment"s, with operations - void enqueue(Segment) - Segment dequeue() - Boolean isEmpty() - Segment head() - Segment tail() - Segment previous(Segment): returns the segment prior to a given one - Segment next(Segment): returns the segment after a given one - unsigned totalDataSize(): returns the sum of the "frameDataSize" fields of each entry in the queue
A.1 Converting a sequence of "MP3 Frames" to a sequence of "ADU Frames":
   SegmentQueue pendingMP3Frames; // initially empty

   while (1) {
       // Enqueue new MP3 Frames, until we have enough data to generate
       // the ADU for a frame:
       do {
           int totalDataSizeBefore = pendingMP3Frames.totalDataSize();

           Segment newFrame = 'the next MP3 Frame';
           pendingMP3Frames.enqueue(newFrame);

           int totalDataSizeAfter = pendingMP3Frames.totalDataSize();
       } while (totalDataSizeBefore < newFrame.backpointer
                || totalDataSizeAfter < newFrame.aduDataSize);

       // We now have enough data to generate the ADU for the most
       // recently enqueued frame (i.e., the tail of the queue).
       // (The earlier frames in the queue - if any - must be
       // discarded, as we don't have enough data to generate
       // their ADUs.)
       Segment tailFrame = pendingMP3Frames.tail();

       // Output the header and side info:
       output(tailFrame.header);
       output(tailFrame.sideInfo);

       // Go back to the frame that contains the start of our ADU data:
       int offset = 0;
       Segment curFrame = tailFrame;
       int prevBytes = tailFrame.backpointer;
       while (prevBytes > 0) {
           curFrame = pendingMP3Frames.previous(curFrame);
           int dataHere = curFrame.frameDataSize;
           if (dataHere < prevBytes) {
               prevBytes -= dataHere;
           } else {
               offset = dataHere - prevBytes;
               break;
           }
       }

       // Dequeue any frames that we no longer need:
       while (pendingMP3Frames.head() != curFrame) {
           pendingMP3Frames.dequeue();
       }

       // Output, from the remaining frames, the ADU data that we want:
       int bytesToUse = tailFrame.aduDataSize;
       while (bytesToUse > 0) {
           int dataHere = curFrame.frameDataSize - offset;
           int bytesUsedHere = dataHere < bytesToUse ? dataHere : bytesToUse;

           output("bytesUsedHere" bytes from curFrame.frameData,
                  starting from "offset");

           bytesToUse -= bytesUsedHere;
           offset = 0;
           curFrame = pendingMP3Frames.next(curFrame);
       }
   }
A.2 Converting a sequence of "ADU Frames" to a sequence of "MP3 Frames":
   SegmentQueue pendingADUFrames; // initially empty

   while (1) {
       while (needToGetAnADU()) {
           Segment newADU = 'the next ADU Frame';
           pendingADUFrames.enqueue(newADU);

           insertDummyADUsIfNecessary();
       }

       generateFrameFromHeadADU();
   }

   Boolean needToGetAnADU() {
       // Checks whether we need to enqueue one or more new ADUs before
       // we have enough data to generate a frame for the head ADU.
       Boolean needToEnqueue = True;

       if (!pendingADUFrames.isEmpty()) {
           Segment curADU = pendingADUFrames.head();
           int endOfHeadFrame = curADU.mp3FrameSize
               - curADU.headerSize - curADU.sideInfoSize;
           int frameOffset = 0;

           while (1) {
               int endOfData = frameOffset - curADU.backpointer
                   + curADU.aduDataSize;
               if (endOfData >= endOfHeadFrame) {
                   // We have enough data to generate a frame.
                   needToEnqueue = False;
                   break;
               }

               frameOffset += curADU.mp3FrameSize
                   - curADU.headerSize - curADU.sideInfoSize;
               if (curADU == pendingADUFrames.tail()) break;
               curADU = pendingADUFrames.next(curADU);
           }
       }

       return needToEnqueue;
   }
   void generateFrameFromHeadADU() {
       Segment curADU = pendingADUFrames.head();

       // Output the header and side info:
       output(curADU.header);
       output(curADU.sideInfo);

       // Begin by zeroing out the rest of the frame, in case the ADU
       // data doesn't fill it in completely:
       int endOfHeadFrame = curADU.mp3FrameSize
           - curADU.headerSize - curADU.sideInfoSize;
       output("endOfHeadFrame" zero bytes);

       // Fill in the frame with appropriate ADU data from this and
       // subsequent ADUs:
       int frameOffset = 0;
       int toOffset = 0;

       while (toOffset < endOfHeadFrame) {
           int startOfData = frameOffset - curADU.backpointer;
           if (startOfData > endOfHeadFrame) {
               break; // no more ADUs are needed
           }
           int endOfData = startOfData + curADU.aduDataSize;
           if (endOfData > endOfHeadFrame) {
               endOfData = endOfHeadFrame;
           }

           int fromOffset;
           if (startOfData <= toOffset) {
               fromOffset = toOffset - startOfData;
               startOfData = toOffset;
               if (endOfData < startOfData) {
                   endOfData = startOfData;
               }
           } else {
               fromOffset = 0;

               // leave some zero bytes beforehand:
               toOffset = startOfData;
           }

           int bytesUsedHere = endOfData - startOfData;
           output(starting at offset "toOffset", "bytesUsedHere" bytes
                  from "&curADU.frameData[fromOffset]");
           toOffset += bytesUsedHere;

           frameOffset += curADU.mp3FrameSize
               - curADU.headerSize - curADU.sideInfoSize;
           curADU = pendingADUFrames.next(curADU);
       }

       pendingADUFrames.dequeue();
   }
   void insertDummyADUsIfNecessary() {
       // The tail segment (ADU) is assumed to have been recently
       // enqueued.  If its backpointer would overlap the data
       // of the previous ADU, then we need to insert one or more
       // empty, 'dummy' ADUs ahead of it.  (This situation should
       // occur only if an intermediate ADU was missing - e.g., due
       // to packet loss.)
       while (1) {
           Segment tailADU = pendingADUFrames.tail();
           int prevADUend; // relative to the start of the tail ADU

           if (pendingADUFrames.head() != tailADU) {
               // there is a previous ADU
               Segment prevADU = pendingADUFrames.previous(tailADU);
               prevADUend = prevADU.mp3FrameSize + prevADU.backpointer
                   - prevADU.headerSize - prevADU.sideInfoSize;
               if (prevADU.aduDataSize > prevADUend) {
                   // this shouldn't happen if the previous
                   // ADU was well-formed
                   prevADUend = 0;
               } else {
                   prevADUend -= prevADU.aduDataSize;
               }
           } else {
               prevADUend = 0;
           }

           if (tailADU.backpointer > prevADUend) {
               // Insert a 'dummy' ADU in front of the tail.
               // This ADU can have the same "header" (and thus
               // "mp3FrameSize") as the tail ADU, but should
               // have an "aduDataSize" of zero.  The simplest
               // way to do this is to copy the "sideInfo" from
               // the tail ADU, and zero out the
               // "main_data_begin" and all of the
               // "part2_3_length" fields.
           } else {
               break; // no more dummy ADUs need to be inserted
           }
       }
   }
Appendix B: Interleaving and Deinterleaving
The following 'pseudo code' describes how a sender can reorder a sequence of "ADU Frames" according to an interleaving pattern (step 2), and how a receiver can perform the reverse reordering (step 6).
B.1 Interleaving a sequence of "ADU Frames":
We first define the following abstract data structures:
- "interleaveCycleSize": an integer in the range [1,256] - "interleaveCycle": an array, of size "interleaveCycleSize", containing some permutation of the integers from the set [0 .. interleaveCycleSize-1] e.g., if "interleaveCycleSize" == 8, "interleaveCycle" might contain: 1,3,5,7,0,2,4,6 - "inverseInterleaveCycle": an array containing the inverse of the permutation in "interleaveCycle" - i.e., such that interleaveCycle[inverseInterleaveCycle[i]] == i - "ii": the current Interleave Index (initially 0) - "icc": the current Interleave Cycle Count (initially 0) - "aduFrameBuffer": an array, of size "interleaveCycleSize", of ADU Frames that are awaiting packetization
- “interleaveCycleSize”:范围为[1256]-“interleaveCycleSize”的整数:大小为“interleaveCycleSize”的数组,包含集合[0..interleaveCycleSize-1]中的一些整数排列,例如,如果“interleaveCycleSize”==8,“interleaveCycle”可能包含:1,3,5,7,0,2,4,6-“inverseInterleaveCycle”:一个数组,包含“交织循环”中排列的倒数,即交织循环[inverseInterleaveCycle[i]==i-“ii”:当前交织索引(最初为0)-“icc”:当前交织循环计数(最初为0)-“aduFrameBuffer”:一个大小为“interleaveCycleSize”的数组,等待打包的ADU帧的数量
   while (1) {
       int positionOfNextFrame = inverseInterleaveCycle[ii];
       aduFrameBuffer[positionOfNextFrame] = the next ADU frame;
       replace the high-order 11 bits of this frame's MPEG header
           with (ii,icc);
       // Note: Be sure to leave the remaining 21 bits as is

       if (++ii == interleaveCycleSize) {
           // We've finished this cycle, so pass all
           // pending frames to the packetizing step
           for (int i = 0; i < interleaveCycleSize; ++i) {
               pass aduFrameBuffer[i] to the packetizing step;
           }

           ii = 0;
           icc = (icc+1)%8;
       }
   }
B.2 Deinterleaving a sequence of (interleaved) "ADU Frames":
We first define the following abstract data structures:
- "ii": the Interleave Index from the current incoming ADU frame - "icc": the Interleave Cycle Count from the current incoming ADU frame - "iiLastSeen": the most recently seen Interleave Index (initially, some integer *not* in the range [0,255]) - "iccLastSeen": the most recently seen Interleave Cycle Count (initially, some integer *not* in the range [0,7]) - "aduFrameBuffer": an array, of size 32, of (pointers to) ADU Frames that have just been depacketized (initially, all entries are NULL)
- “ii”:来自当前传入ADU帧的交织索引-“icc”:来自当前传入ADU帧的交织周期计数-“iiLastSeen”:最近看到的交织索引(最初,某个整数*不在范围[0255]内])-“icc”:最近看到的交织周期计数(最初,范围[0,7]中的某个整数*不*)-“aduFrameBuffer”:一个大小为32的数组,包含(指向)刚刚被解包的ADU帧(最初,所有条目均为空)
   while (1) {
       aduFrame = the next ADU frame from the depacketizing step;
       (ii,icc) = "the high-order 11 bits of aduFrame's MPEG header";
       "the high-order 11 bits of aduFrame's MPEG header" = 0xFFE;
       // Note: Be sure to leave the remaining 21 bits as is

       if (icc != iccLastSeen || ii == iiLastSeen) {
           // We've started a new interleave cycle
           // (or interleaving was not used).  Release all
           // pending ADU frames to the ADU->MP3 conversion step:
           for (int i = 0; i < 32; ++i) {
               if (aduFrameBuffer[i] != NULL) {
                   release aduFrameBuffer[i];
                   aduFrameBuffer[i] = NULL;
               }
           }
       }

       iiLastSeen = ii;
       iccLastSeen = icc;
       aduFrameBuffer[ii] = aduFrame;
   }
Full Copyright Statement
Copyright (C) The Internet Society (2001). All Rights Reserved.
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.
The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.