

Rapid Channel Zapping for IPTV Broadcasting with Additional Multicast Stream
Chikara Sasaki, Atsushi Tagami, Teruyuki Hasegawa, Shigehiro Ano
KDDI R&D Laboratories, 2-1-15 Ohara, Fujimino-shi, Saitama 356-8502, Japan
Abstract: This paper proposes a novel approach for improving channel zapping delay in IPTV broadcasting services. Channel zapping delay is a crucial issue for digital TV broadcasting, which entails audio/video data buffering before reproduction. Such delay can be mitigated in an IP environment if a receiver accelerates this buffering using additional burst transmissions from dedicated servers when a channel zap occurs. However, most conventional solutions are based on unicast bursts, which may cause an impulsive server/network load because channel zaps tend to happen simultaneously when programs finish or are suspended by commercial messages. To reduce this load, we propose a new multicast-based solution that takes into consideration the timing variation of channel zaps on each receiver. We have confirmed a maximum 1-second reduction in the zapping delay using commercial multicast streams.

Keywords: IPTV; multicast; rapid channel zapping; buffering acceleration


I. INTRODUCTION

Currently, as a result of the significant progress in digitization, one of the biggest innovations is the ability to transport any type of data over an IP network at low cost, and various services have been provided on this IP platform. As such, the demand for audio/video distribution services, as typified by Internet Protocol TeleVision (IPTV) [1]-[4], has increased, including Video On Demand (VOD) and TV broadcasting services. In VOD, viewers can start a program from its beginning at any time. IPTV broadcasting, however, requires all viewers to watch a program simultaneously according to a predetermined schedule. To efficiently distribute a TV program to a large number of viewers, IP multicast is a promising solution because it avoids sending the same data over the network multiple times by replicating packets at branching routers only when viewers are present in the downstream segments. In this paper, we focus on IPTV broadcasting with IP multicast.

Unlike analog TV broadcasting, a channel zapping delay occurs before program reproduction starts on the new channel in digital TV broadcasting, due to the necessity of audio/video data buffering at TV receivers. For example, in IPTV, such time-consuming buffering is inevitable in receivers, i.e., Set Top Boxes (STBs), for the following processes:
1. Decoding audio/video content compressed/encoded digitally,
2. Decoding Forward Error Correction (FEC) codes [5][6] applied for resilient distribution,
3. Absorbing network jitter, etc.

References [7] and [8] present a scheme for realizing rapid channel zapping, which reduces this buffering time by unicast burst transmission from dedicated zapping servers. However, the zapping servers must respond individually to each channel zap request generated by viewers due to the unicast mechanism. If several viewers simultaneously request channel zaps, a heavy load is imposed not only on the zapping servers but also on the distribution network, in proportion to the number of viewers. Since viewers are apt to change channels at similar times, such as at the beginning of commercials or at program termination, we consider this problem to be a serious drawback from the viewpoint of service design and operation, where it is preferable to handle well-provisioned network traffic and server transaction volumes.

In order to prevent such an impulsive load, we present a novel rapid channel zapping scheme with an additional multicast stream instead of a unicast burst. In addition to the original stream, an STB also receives an additional multicast stream, which is simply a copied and constantly delayed version of the original stream, so that the buffering time is halved. Furthermore, we extend our multicast-based scheme to an additional multicast stream with r (r: positive integer) times the transmission rate of the original stream, to realize even more rapid channel zapping. Since the timing difference of the channel zap on each STB may cause redundant/invalid packet receptions, we have developed a packet ordering rule for the additional stream, from which any STB can obtain nearly optimal packets for buffering acceleration whenever it starts zapping. We also verified the effect of the proposed scheme by applying our prototype system to a commercial IPTV broadcasting service [4] and confirmed a maximum 1-second reduction in the zapping delay.

The remainder of the paper is organized as follows. Section II describes the IPTV broadcasting service and channel zapping delay. Section III describes the proposed scheme for rapid channel zapping, which adopts an add-on multicast stream, and its r multiplied rate extension, focusing on the packet ordering rule. Section IV shows the evaluation results obtained by applying the prototype system to commercial multicast streams. Sections V and VI provide discussions and concluding remarks, respectively.

978-1-4244-2075-9/08/$25.00 © 2008 IEEE



Fig. 1. IPTV broadcasting overview.

Fig. 2. Channel zapping delay composition.

Fig. 3. Conventional rapid channel zapping framework (unicast-based).

II. IPTV BROADCASTING AND CHANNEL ZAPPING DELAY


Fig. 4. Proposed framework (multicast-based).

A. IPTV broadcasting

In this paper, we define IPTV broadcasting as an audio/video distribution service over an IP multicast network, rather than over terrestrial/satellite radio waves or cable. IPTV broadcasting delivers TV programs simultaneously through multicast transmission paths (i.e., TV channels) in accordance with the TV program schedule. A different multicast address is assigned to each TV channel to associate a channel with a certain multicast group.

Figure 1 shows an overview of IPTV broadcasting. The server sends audio/video data compressed in the MPEG2 [9] or H.264 [10] format to an assigned multicast group. Generally, FEC encoding is applied before sending to compensate for unexpected packet losses on the network. A viewer selects a TV channel via the STB, the receiving and decoding apparatus. The STB sends an IGMP [11] message to join multicast group G corresponding to the channel. The STB then receives multicast stream G and decodes its FEC and compression. Finally, the STB transmits audio/video signals to the presentation devices.

Audio/video data buffering is required for the STB to execute FEC and compression decoding as well as to absorb network jitter. This causes a delay of a few seconds after a channel zap is requested. Figure 2 illustrates the composition of the zapping delay, where the leave/join delay is several tens of ms, the buffering delay is from several hundred ms to a few seconds, and the decoding delay is from several hundred ms to a few seconds. The goal of this study is to reduce the buffering delay in order to realize rapid channel zapping.

B. Conventional scheme for rapid channel zapping

The conventional scheme described in [7] and [8] reduces the buffering time and first packet arrival time by unicast burst transmission at the beginning of a channel zap. The additional transmission starts from the previous I-frame in the original multicast stream, which is the basis of the video compression. As

a result, unnecessary transmission prior to the I-frame, i.e., of P/B-frames, is omitted. Figure 3 presents the unicast-based framework for rapid channel switching. An STB issues a channel zap request to a zapping server, and the server sends burst data starting from the I-frame to the STB as a unicast stream, filling the initial play-out buffer in the STB immediately for a quick start of reproduction. In addition, the STB has to join and receive the original multicast stream G in order to continue the reproduction after using up the burst data. Since the zapping server responds to each STB with an individual unicast stream, the optimal burst can be generated for its zap timing.
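As an aside on the join step, the sketch below shows how a receiver can subscribe to a multicast group in the way an STB or zapping client would. It is an illustrative sketch only; the group address, port, and interface are placeholder values and are not taken from the paper.

```python
import socket
import struct

# Placeholder values; a real STB would obtain these from its channel map.
GROUP = "239.1.1.1"   # multicast group G of the selected channel (assumed)
PORT = 5004           # UDP port of the stream (assumed)

def join_channel(group: str, port: int) -> socket.socket:
    """Open a UDP socket and issue an IGMP join for the given group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP triggers the IGMP membership report (join) on the LAN.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def leave_channel(sock: socket.socket, group: str) -> None:
    """Issue an IGMP leave when zapping away from the channel."""
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()

if __name__ == "__main__":
    s = join_channel(GROUP, PORT)
    data, _ = s.recvfrom(2048)   # blocks until a packet of stream G arrives
    print(f"received {len(data)} bytes from {GROUP}")
    leave_channel(s, GROUP)
```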

III. PROPOSED SCHEME

A. Problem statement

Although unicast bursts are optimal for individual zap requests, as described above, many similar unicast streams may be generated if several viewers request channel zaps at the same time. This results in network/server inefficiency and, even worse, an unexpected impulsive load. In IPTV broadcasting, such concurrent zaps are highly likely, triggered by periodic commercial messages or the boundaries of TV programs. We consider this a serious problem with respect to the scalability of the service, since millions of viewers and various types of high-quality audio/video content consuming a bandwidth of more than 10 Mbps are becoming commonplace. From the viewpoint of capacity design and operation of the network and servers, it is desirable that the upper limit of the additional load be predictable and stable, independent of service growth.



Fig. 5. Example of buffering (b = 6, d = 3).
Fig. 6. Example of r multiplied rate (r = 3, b = 12, silly ordering).

B. Multicast-based approach

Instead of a unicast stream, we propose adding another multicast stream, replicated from the original stream with a constant delay. In addition to receiving the original multicast stream, an STB obtains the adjacent (immediately preceding) data blocks from the additional stream in order to fill its initial buffer rapidly. We assume the following conditions with respect to the STB implementation:
- The STB accumulates b packets for initial buffering.
- The STB provides a sort function for packets received out of order.

Figure 4 shows the framework of the proposed scheme. The outline of its operation is as follows:
1. An accelerator server continuously receives multicast stream G by sending an IGMP join message and stores the data.
2. The accelerator server translates the multicast group to G' and sends this multicast stream G' with a fixed delay corresponding to the reception time of d packets.
3. An STB joins and starts receiving both multicasts G and G' when the viewer requests a channel zap to G.
4. The STB receives and sorts packets from G and G' until the initial play-out buffer is filled.
5. The STB starts regenerating audio/video signals after the completion of initial buffering and leaves multicast G'.

In this scheme, the accelerator server does nothing but send the additional multicast stream G' once, no matter how many STBs change to the channel at the same time. Duplication of unicast bursts by concurrent zap requests is avoided because multicast in itself suppresses such redundant packet copies over the entire network. Therefore, the server and network loads do not depend on the number of viewers, but rather on the total number of IPTV channels, for which the upper limit is predictable.

Let the initial buffer size in an STB be b. Then, the insertion delay d should be set as follows:

d = [b / 2],  (1)

where [x] denotes x rounded up to the nearest integer. Figure 5 shows an example of buffering for b = 6. In the case of receiving multicast G only, an STB receives packets beginning from packet #4, and initial buffering is complete at the reception of packet #9.

On the other hand, since the STB also obtains packets #1, #2, and #3 from G' in the proposed scheme, the buffering process is complete at the reception of packet #6 from G. As a result, the buffering time is cut in half. Note that the play-out time of the audio/video is synchronized in both cases, whether or not G' is applied.

C. Extension to r multiplied transmission rate

The above scheme is very attractive in terms of simplicity, but the reduction ratio of the buffering time is only 1/2. Thus, we extend the proposed multicast approach to achieve a shorter buffering time by raising the transmission rate of the additional multicast G'. Let r (r: positive integer) be the ratio of the transmission rate of G' to that of G. Figure 6 depicts a buffering example for r = 3 and b = 12. An STB starts to receive at time t1 and receives twelve packets from G and G' by t2. In this case, the initial buffering is complete at t2 because the STB receives twelve unique and consecutive packets, from #1 to #12. However, if an STB instead starts at t1', duplicate packets (#11, #12) appear among the twelve packets received by t2'. Moreover, packet #15 of G' overtakes the corresponding packet of G at t3 because, under this naive ordering, the packet numbers of G' simply increase one by one. This is impractical because G' is generated from G. Therefore, when the ratio r is two or more, the packet ordering of G' must be designed carefully.
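To make the failure of this naive ordering concrete, the following sketch reproduces the situation of Fig. 6 under a simple discrete-slot model of our own (one packet of G per slot, packet number equal to the slot index, and G' starting from packet #1 in an assumed slot); these modeling choices are assumptions of the sketch, not of the paper.

```python
# Discrete-slot model (assumption of this sketch): the original stream G emits
# packet #t in slot t; the naive triple-rate copy G' starts from packet #1 in
# slot START and simply counts upward, three packets per slot.
R = 3          # rate ratio between G' and G
B = 12         # initial buffer size b
START = 10     # slot in which G' emits its first packets (assumed for the example)

def g_packets(slot):
    return [slot]                      # G: one packet per slot, number == slot

def g_prime_naive(slot):
    base = R * (slot - START)          # G': numbers just increase one by one
    return [base + 1, base + 2, base + 3] if slot >= START else []

def naive_buffering(start_slot):
    """Collect packets from G and G' until B packets arrive; report duplicates."""
    received = []
    slot = start_slot
    while len(received) < B:
        received += g_packets(slot) + g_prime_naive(slot)
        slot += 1
    dups = sorted({p for p in received if received.count(p) > 1})
    return sorted(received), dups

for t1 in (10, 11):                    # t1 and t1' of Fig. 6
    pkts, dups = naive_buffering(t1)
    print(f"start slot {t1}: packets {pkts} duplicates {dups}")

# Overtaking: in slot 14 the naive G' already emits #15, although G sends #15
# only in slot 15 -- impossible in practice because G' is generated from G.
print("slot 14, G' emits:", g_prime_naive(14), "while G emits:", g_packets(14))
```

Starting in slot 10 yields the twelve unique packets #1 to #12, whereas starting one slot later yields duplicates #11 and #12, matching the behavior described above.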

We have designed the ordering rule based on the following policies.

Policies:
1. The basic processing unit is the set of one packet in G and the corresponding r packets in G', as shown in Fig. 7.
2. The set of r packet numbers in G' should be calculated only from the corresponding packet number in G.
3. The packets picked up from G and G' should be unique and consecutive whenever an STB starts to receive.
4. The packet number of G' never exceeds that of G.

In policy 1, we assume that the r packets of G' arrive simultaneously with a packet of G in order to simplify the packet reception timing of G'. Policy 2 provides a permanent packet ordering rule and the same buffering time, regardless of the start time. Figure 7 illustrates the relationship of the packet orders. The details of the proposed ordering rule are described below.



Fig. 9. Testbed.

Fig. 7. Relationship of packet orders.

Fig. 8. Example of packet order in proposed rule (r = 3, b = 12, d = 3).

Proposed packet ordering rule:


Let G'_{i,1}, G'_{i,2}, ..., G'_{i,r} be the numbers of the r packets in G' received simultaneously with packet G_i of G (i: positive integer). Here, G'_{i,j} is given by

G'_{i,j} = G_i - j * d,  (2)

for any j = 1, 2, ..., r, where d = [b / (r + 1)]. As shown in Fig. 7, the next packet numbers (G'_{i+1,1}, G'_{i+1,2}, ..., G'_{i+1,r}) in G' are determined by the same formula, with the packet number in G updated from G_i to G_{i+1}. Hence the equation

G'_{i,j} + 1 = G'_{i+1,j}  (3)

is satisfied for any i and j, because G_i + 1 = G_{i+1}. Figure 8 shows an example of the packet order for r = 3, b = 12, and d = 3. Whether an STB starts receiving at time t1 or at t1', the buffering time is identical, i.e., buffering is complete at t2 or t2', respectively. Although we assume that the 1 + r packets (one packet of G and r packets of G') arrive at the same time, the accelerator server can actually send the r packets G'_{i,j} (j = 1, 2, ..., r) in any order (e.g., in descending order in Fig. 8).

In the following, we show that the set of buffered d(1 + r) (>= b) packets has unique and consecutive numbers, using Fig. 8 (r = 3, b = 12, d = 3) as an illustration. The STB has buffered d packets of G and dr packets of G', whose numbers are denoted by G_k and G'_{k,j} (k = i, i+1, ..., i+d-1 and j = 1, 2, ..., r), respectively. The minimum number among the d(1 + r) received packets is G'_{i,r}. Applying Equation (3) recursively proves that the d received packets from G'_{i,r} to G'_{i+d-1,r} are consecutive. Moreover, since G'_{i+d-1,r} = G'_{i,r} + d - 1 and G'_{i,r-1} = G'_{i,r} + d follow from Equations (3) and (2), respectively, we obtain G'_{i+d-1,r} + 1 = G'_{i,r-1}. Therefore, the STB has buffered at least d + 1 consecutive packets from G'_{i,r} through G'_{i+d-1,r} to G'_{i,r-1}. In exactly the same manner, the d received packets from G'_{i,r-1} to G'_{i+d-1,r-1} are consecutive, and G'_{i+d-1,r-1} + 1 = G'_{i,r-2} is satisfied. By repeating this argument, the dr packets from G'_{i,r} to G'_{i+d-1,1} are proven to be consecutive. We then obtain G'_{i+d-1,1} + 1 = G_i from G'_{i+d-1,1} = G'_{i,1} + d - 1 and G_i = G'_{i,1} + d. Obviously, the d received packets from G_i to G_{i+d-1} are also consecutive. Therefore, the d(1 + r) received packets are unique and consecutive. In other words, the STB buffers b (= d(1 + r)) consecutive packets, exactly as if it had buffered them from G alone over a b-packet reception time. In the case where b is not divisible by r + 1, the packet order is exactly the same as that for the buffer size b' = [b/(r + 1)](r + 1); since the STB then buffers b' (> b) consecutive packets in the same d-packet reception time, only b' - b extra packets are buffered.

Finally, we discuss the buffering time reduction of the proposed scheme. An STB obtains 1 + r packets from G and G' during each single-packet reception interval of G, so the complete buffering time equals the reception time of d (= [b/(r + 1)]) packets of G. The proposed scheme thus reduces the buffering time to d/b (= [b/(r + 1)]/b, approximately 1/(r + 1)) of that of the normal scheme (receiving G only).
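The ordering rule and the consecutiveness argument above can be checked numerically. The sketch below reuses the discrete-slot model introduced earlier (an assumption of ours, not of the paper), generates G' according to Equation (2), and verifies that an STB joining at any slot collects b unique, consecutive packets within d slots; setting r = 1 reproduces the delayed-copy scheme of Section III-B.

```python
import math

def schedule_g_prime(slot, r, d):
    """Packet numbers of G' sent in this slot, per Equation (2): G'_{i,j} = G_i - j*d."""
    g_i = slot                               # G emits packet #slot in this slot (model assumption)
    return [g_i - j * d for j in range(1, r + 1)]

def buffering(start_slot, b, r):
    """Simulate initial buffering of an STB that joins G and G' at start_slot."""
    d = math.ceil(b / (r + 1))               # insertion delay, d = [b/(r+1)]
    received = []
    slot = start_slot
    while len(received) < b:
        received.append(slot)                            # one packet from G
        received.extend(schedule_g_prime(slot, r, d))    # r packets from G'
        slot += 1
    slots_used = slot - start_slot
    unique = sorted(set(received))
    consecutive = unique == list(range(unique[0], unique[0] + len(unique)))
    return slots_used, len(received), len(unique), consecutive

# Parameters of Fig. 8 (r = 3, b = 12, d = 3); any start slot gives the same result.
for start in range(20, 26):
    slots, total, uniq, consec = buffering(start, b=12, r=3)
    print(f"start {start}: {slots} slots, {total} packets, "
          f"{uniq} unique, consecutive={consec}")

# r = 1 reduces to the delayed-copy scheme of Section III-B (buffering time halved).
print(buffering(start_slot=20, b=6, r=1))    # -> (3, 6, 6, True)
```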

IV. EVALUATION

This section evaluates the proposed rapid channel zapping scheme by applying our prototype system to an existing IPTV broadcasting service [4]. Figure 9 illustrates the testbed.



Table 1. Channel zapping delay (seconds).

                  Average   Maximum   Minimum
Proposed scheme   1.32      2.1       0.8
Normal            1.83      2.2       1.3

Fig. 10. Reduction time of channel zapping.

The accelerator server is attached to a layer 2 switch on which IGMP snooping [12] is enabled to restrict unwanted multicast flooding. Since the commercially deployed STBs in [4] can neither handle two multicasts (i.e., the original and additional multicasts) nor sort packets received out of order, we implemented an STB proxy with the following functions:
1. The STB proxy joins both G and G' on detecting a join message to G from the STB.
2. The proxy sorts the packets from G and G' and then sends them in a burst to the STB as multicast stream G.
Note that the proxy acts as a layer 2 bridge for all other frames. The other conditions are as follows:
- The rate of G is 6 Mbps (p = 500 packets per second).
- The content is MPEG2 compressed.
- r = 3, d = 100, and b = 400.
- The maximum burst transmission rate from the STB proxy to the STB is 42 Mbps (q = 7, i.e., seven times the original transmission rate).
Because of this proxy-based replacement implementation, the delay increases by the burst transmission time from the STB proxy to the STB.

For comparison, the testbed includes two STBs, one of which is equipped with the proxy supporting rapid channel zapping. We measured the zapping delay of each STB by requesting a zap on both simultaneously using a remote controller. From the recorded video, the zapping time was measured as the duration from event 1 to event 2 below:
1. A channel zap request is issued.
2. The audio/video signal appears on the presentation device.

Table 1 presents the results of 200 trials, and Fig. 10 shows the reduction time of each trial in ascending order for both the with-proxy and without-proxy cases.

Fig. 11. Variations of accelerator configuration: (a) attachment to the same router as the media server(s); (b) attachment to edge routers.

The proposed scheme keeps the zapping delay between 0.8 and 2.1 seconds, while normal channel zapping takes from 1.3 to 2.2 seconds. The maximum reduction is 1.1 seconds, and no reduction occurred in four trials. The average reduction is 0.51 seconds. This variability is thought to depend on the timing between the zap request and I-frame reception. Note that the play-out timings of the normal and proposed schemes were well synchronized.

V. DISCUSSION

A. Reduction time analysis for the prototype

We have demonstrated the effectiveness of the proposed scheme, which reduces the zapping delay of a commercial multicast stream by an average of 0.51 seconds. The reduction, however, is slightly diminished by the burst transmission time from the STB proxy to the STB caused by the proxy-based implementation of the prototype. The theoretical reduction time is given by

{r/(1 + r) - 1/q} * b/p,

which is derived from the following terms:
- r/(1 + r) is the reduction ratio of the buffering (= 3/4),
- b/p is the normal buffering time (= 400/500 seconds),
- 1/q is the relative burst transmission time from the STB proxy to the STB (= 1/7).
Substituting the parameter values into the formula yields a theoretical value of 0.49 seconds. Thus, the measured average reduction of 0.51 seconds agrees closely with the theoretical value of 0.49 seconds.
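As a quick numerical check, the following lines evaluate this formula with the prototype parameters of Section IV (the code itself is ours, not part of the prototype):

```python
r, q = 3, 7          # rate ratio of G' and burst-rate factor of the proxy (Section IV)
b, p = 400, 500      # initial buffer size (packets) and stream rate (packets/s)

reduction = (r / (1 + r) - 1 / q) * b / p   # theoretical reduction time, in seconds
print(f"theoretical reduction: {reduction:.2f} s")   # -> 0.49 s
```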



Fig. 12. Example of multiple acceleration rates (r = 3, b = 12, d = 3, r' = 1, n = 2).

B. Bandwidth estimation

We estimate the bandwidths of the proposed and conventional schemes for comparison. Here, we define R, L, M, and u as follows:
- R: the rate of one channel,
- L: the number of channels,
- M: the number of STBs underneath the estimation point,
- u: the burst transmission rate factor of the conventional scheme.
In the proposed scheme, the traffic of the two multicasts (G and G') with a (1 + r)R transmission rate occurs at a channel zap, while the conventional scheme transmits at a rate of uR per zap. However, the traffic of the proposed scheme does not increase even if M STBs change channel simultaneously, whereas the conventional scheme increases its rate in proportion to M. Thus, the larger M (>> L) is, the greater the suppression, because the total rates of the proposed and conventional schemes are (1 + r)LR and uMR, respectively. In addition, from the viewpoint of the service and/or network provider, it is a tremendous advantage that the total bandwidth demand can be well provisioned in the proposed scheme, where r, L, and R are predetermined values, whereas M can vary considerably depending on the service dissemination.

C. Attachment point of the accelerator server(s)

Here, we discuss the attachment point of the accelerator servers. Figure 11 presents some configuration examples, in which a number of STBs located in the same building are accommodated on the same access line via a layer 2 switch. Since IGMP snooping is enabled in the switch, each STB receives only the requested multicasts. As extreme examples, we consider the following two configurations for the accelerator servers:
(a) A single accelerator server is attached to the same router as the media server(s).
(b) Multiple accelerator servers are attached to the provider edge routers.
In configuration (a), the accelerator server is installed near the media server, as shown in Fig. 11(a). The advantage of (a) is that just one accelerator server can support all the STBs; however, the traffic of the additional multicast G' traverses the core network. In contrast, in configuration (b), where the accelerator servers are located near the STBs as shown in Fig. 11(b), the main advantage is that the additional multicast G' does not use core network resources; instead, it is necessary to deploy a large number of accelerator servers, corresponding to the number of edge routers. While it is easy to apply the proposed scheme to both configurations in terms of server/network load, it is difficult to apply the conventional scheme to configuration (a) because its server/network load increases towards the upstream section as an increasing number of STBs needs to be handled. From the viewpoint of location flexibility for easy deployment, we consider the proposed scheme to be both preferable and promising.

D. Extension to multi-rate support for the additional stream

So far, we have discussed only the case in which the additional multicast stream is delivered as a single group G'. However, it is also possible to divide G' into multiple multicast groups in order to realize various zapping acceleration rates according to the characteristics of the access media. Generally, another acceleration rate r' can be realized together with the rate r if some positive integer n satisfies

1 + r = n(1 + r').  (4)

An STB requesting a (1 + r')-multiplied rate must receive the r' packets G'_{i,n}, G'_{i,2n}, ..., G'_{i,r'n} from G' at the reception timing of G_i. Its buffering is then complete in the time it takes to receive nd packets of G. In this case, it is only necessary to divide G' into two multicast groups: one containing the packet sequences G'_{i,n}, G'_{i,2n}, ..., G'_{i,r'n} for i = 1, 2, ..., and the other containing the remaining sequences. More generally, G' should be subdivided into r multicast groups G'1, ..., G'r in order to realize multiple zapping acceleration rates: for any j (= 1, 2, ..., r), the packet sequence G'_{i,j} (i = 1, 2, ...) belongs to multicast group G'j, and an STB joins only G'n, G'2n, ..., G'r'n as additional multicasts. Figure 12 depicts a division example of G' using the same parameters as Fig. 8. In Fig. 12, some STBs may prefer r' = 1 as the additional multicast rate due to insufficient bandwidth, while others may use r = 3. The former STBs receive only G'2 as the additional multicast, whose packet numbers satisfy G'_{i,2} = G_i - 2d for any i, halving the zapping delay (reception time t3 - t1); this is equivalent to Equation (2) with d and r replaced by nd (= 2d) and r' (i.e., j = 1). The other STBs achieve a 1/4 zapping delay as usual (reception time t2 - t1) by receiving all the additional multicasts G'1, G'2, and G'3. We consider that such rate variations could be useful in heterogeneous access environments.
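To illustrate this division, the sketch below (our own construction, reusing the earlier discrete-slot model) splits G' into the sub-groups G'1, G'2, and G'3 for r = 3 and compares the buffering time of an STB that joins only G'2 (r' = 1, n = 2) with one that joins all three sub-groups.

```python
import math

R, B = 3, 12
D = math.ceil(B / (R + 1))                   # d = 3

def sub_group(j, slot):
    """Packets of sub-group G'j in this slot: G'_{i,j} = G_i - j*d, with G_i = slot."""
    return [slot - j * D]

def buffering_time(start_slot, joined):
    """Slots needed to collect B unique, consecutive packets from G plus joined sub-groups."""
    received, slot = set(), start_slot
    while len(received) < B:
        received.add(slot)                   # packet from G
        for j in joined:
            received.update(sub_group(j, slot))
        slot += 1
    assert sorted(received) == list(range(min(received), min(received) + len(received)))
    return slot - start_slot

print(buffering_time(30, joined=[1, 2, 3]))  # full rate r = 3   -> 3 slots (b/(1+r))
print(buffering_time(30, joined=[2]))        # r' = 1, n = 2     -> 6 slots (nd), half of b
print(buffering_time(30, joined=[]))         # G only            -> 12 slots (b)
```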

VI. CONCLUDING REMARKS



In this paper, we proposed a novel rapid channel zapping scheme for IPTV broadcasting services, which accelerates the STB's audio/video data buffering with an additional multicast stream instead of the conventional unicast burst. To accommodate the timing variation of channel zaps across receivers, we also developed a rational packet ordering rule for the additional multicast with an r-multiplied transmission rate. The proposed scheme is superior to the conventional scheme because it makes the server/network load predictable while still reducing the channel zapping delay. Through verification on a prototype system, we confirmed a maximum reduction of 1.1 seconds in the zapping delay using commercial multicast streams; the average reduction of 0.51 seconds is almost the same as the theoretical value. In addition, we addressed the possibility of multi-rate support to realize various degrees of zapping acceleration.

ACKNOWLEDGMENT

The authors wish to thank Dr. S. Akiba and Dr. M. Suzuki of KDDI R&D Laboratories, Inc. for their continuous support and encouragement.

REFERENCES
[1] Maligne TV, http://malignetv.orange.fr/.
[2] FASTWEB, http://www.fastweb.it/portale/.
[3] 4th MEDIA, http://4media.tv/.
[4] MOVIE SPLASH, http://www.hikari-one.com/tv/.
[5] "Forward error correction (FEC) building block," IETF RFC 3452, Dec. 2002.
[6] "The use of forward error correction (FEC) in reliable multicast," IETF RFC 3453, Dec. 2002.
[7] D. Singer, N. Farber, Y. Fisher, and J. Fleury, "Fast channel changing in RTP," Internet Streaming Media Alliance (ISMA), Tech. Rep., 2006.
[8] "Challenges in the design and deployment of IP video delivery," Cisco Systems, 2006. http://www.seas.upenn.edu/profprog/tcom/documents/Oran_Presentation.pdf.
[9] ISO/IEC 13818-2 (MPEG-2 Video), "Information technology - Generic coding of moving pictures and associated audio information: Video."
[10] T. Wiegand, "Joint Final Committee Draft (JFCD) of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC)," JVT-D157, Aug. 2002.
[11] "Internet Group Management Protocol, Version 2," IETF RFC 2236, Nov. 1997.
[12] "Catalyst 2960 Switch Software Configuration Guide, Rel. 12.2(40)SE," http://www.cisco.com/en/US/products/ps6406/products_configuration_guide_book09186a0080875183.html.

