1.1 Introduction
Multimedia has become an inevitable part of any presentation. It has found a variety of applications, from entertainment to education. The evolution of the Internet has also increased the demand for multimedia content.
Definition
Multimedia is media that uses multiple forms of information content and information processing (e.g. text, audio, graphics, animation, video, interactivity) to inform or entertain the user. Multimedia also refers to the use of electronic media to store and experience multimedia content. Multimedia is similar to traditional mixed media in fine art, but with a broader scope. The term "rich media" is synonymous with interactive multimedia.
Multimedia Systems- M.Sc(IT)
advertising. Industrial, business-to-business, and interoffice communications are often developed by creative services firms as advanced multimedia presentations, going beyond simple slide shows to sell ideas or liven up training. Commercial multimedia developers may also be hired to design applications for government and nonprofit services.
Entertainment and Fine Arts
In addition, multimedia is heavily used in the entertainment industry, especially to develop special effects in movies and animations. Multimedia games are a popular pastime and are software programs available either on CD-ROM or online. Some video games also use multimedia features. Multimedia applications that allow users to participate actively, instead of just sitting by as passive recipients of information, are called interactive multimedia.

Education
In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference works such as encyclopaedias and almanacs. A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. Edutainment is an informal term for combining education with entertainment, especially multimedia entertainment.

Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often created as a collaboration between creative professionals and software engineers.

Industry
In the industrial sector, multimedia is used to present information to shareholders, superiors and coworkers. Multimedia is also helpful for employee training, and for advertising and selling products all over the world via virtually unlimited web-based technologies.

Mathematical and Scientific Research
In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia.
Medicine
In medicine, doctors can be trained by watching a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.
Multimedia in Public Places
In hotels, railway stations, shopping malls, museums, and grocery stores, multimedia is becoming available at stand-alone terminals or kiosks to provide information and help. Such installations reduce demand on traditional information booths and personnel, add value, and can work around the clock, even in the middle of the night when live help is off duty. A menu screen from a supermarket kiosk can provide services ranging from meal planning to coupons. Hotel kiosks list nearby restaurants, maps of the city and airline schedules, and provide guest services such as automated checkout. Printers are often attached so users can walk away with a printed copy of the information. Museum kiosks are used not only to guide patrons through the exhibits but, when installed at each exhibit, provide great added depth, allowing visitors to browse through richly detailed information specific to that display.

Check Your Progress 1
List five applications of multimedia.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
On the World Wide Web, standards have been developed for transmitting virtual reality worlds or scenes in VRML (Virtual Reality Modeling Language) documents (with the file name extension .wrl). Using high-speed dedicated computers, multi-million-dollar flight simulators built by Singer, Rediffusion, and others have led the way in commercial applications of VR. Pilots of F-16s, Boeing 777s, and Rockwell space shuttles have made many dry runs before doing the real thing. At the California Maritime Academy and other merchant marine officer training schools, computer-controlled simulators teach the intricate loading and unloading of oil tankers and container ships. Specialized public game arcades have been built recently to offer VR combat and flying experiences for a price. From Virtual World Entertainment in Walnut Creek, California, and Chicago, for example, BattleTech is a ten-minute interactive video encounter with hostile robots. You compete against others, perhaps your friends, who share couches in the same containment bay. The computer keeps score in a fast and sweaty firefight. Similar attractions will bring VR to the public, particularly a youthful public, with increasing presence during the 1990s. The technology and methods for working with three-dimensional images and for animating them are discussed later. VR is an extension of multimedia: it uses the basic multimedia elements of imagery, sound, and animation. Because it requires instrumented feedback from a wired-up person, VR is perhaps interactive multimedia at its fullest extension.
application meets the objectives of the project. It is also necessary to test whether the multimedia project works properly on the intended delivery platforms and meets the needs of the clients.
4. Delivering: The final stage of multimedia application development is to package the project and deliver the completed project to the end user. This stage has several steps, such as implementation, maintenance, shipping and marketing the product.
i) Create the credits for an imaginary multimedia production. Include several outside organizations for services such as audio mixing, video production, and text-based dialogues.
ii) Review two educational CD-ROMs and enumerate their features.
1.11 References
1. Multimedia: Making It Work, by Tay Vaughan
2. Multimedia in Practice: Technology and Applications, by Judith Jeffcoate
Lesson 2 Text
Contents
2.0 Aims and Objectives
2.1 Introduction
2.2 Multimedia Building blocks
2.3 Text in multimedia
2.4 About fonts and typefaces
2.5 Computers and Text
2.6 Character set and alphabets
2.7 Font editing and design tools
2.8 Let us sum up
2.9 Lesson-end activities
2.10 Model answers to Check your progress
2.11 References
2.1 Introduction
All multimedia content contains text in some form. Even menu text is accompanied by a single action such as a mouse click, a keystroke or a finger press on the monitor (in the case of a touch screen). Text in multimedia is used to communicate information to the user. Proper use of text and words in a multimedia presentation helps the content developer communicate the idea and message to the user.
2. Audio: Sound is perhaps the most important element of multimedia. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background.
3. Images: Images, whether analog or digital, play a vital role in multimedia. They take the form of still pictures, paintings or photographs taken with a digital camera.
4. Animation: Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways.
5. Video: Digital video has supplanted analog video as the method of choice for making video for multimedia use. Video in a multimedia project is used to portray real-time moving pictures.
Figure: The letter "F" rendered in a serif font and in a sans-serif font.
Selecting Text Fonts
Choosing the fonts to be used in a multimedia presentation can be a difficult process. The following guidelines help when choosing fonts for a multimedia presentation:
o Use as few typefaces as possible in a single presentation; cramming many fonts onto a single page produces what is called ransom-note typography.
o For small type, use the most legible font available.
o In large-size headlines, adjust the kerning (spacing between the letters).
o In text blocks, adjust the leading for the most pleasing line spacing.
o Use drop caps and initial caps to accent words.
o Choose the effects and colors of a font to make the text look distinct.
o Use anti-aliasing to make text look gentle and blended.
o For special attention to the text, wrap the words onto a sphere or bend them like a wave.
o Use meaningful words and phrases for links and menu items.
o In the case of text links (anchors) on web pages, accent the messages.
o Put the most important text on a web page, such as the menu, in the top 320 pixels.

Check Your Progress 1
List a few fonts available on your computer.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
a quadratic-curve outline font methodology, called TrueType. In addition to printing smooth characters on printers, TrueType can draw characters on a low-resolution (72 dpi or 96 dpi) monitor.
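The quadratic curves that TrueType outlines are built from can be evaluated with a few lines of arithmetic. The sketch below (function name is our own) computes a point on one outline segment; a rasterizer would sample many such points to approximate the curve with straight lines:

```python
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier segment (the curve type used by
    TrueType outlines) at parameter t in [0, 1].
    p0 and p2 are on-curve points; p1 is the off-curve control point."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Sample one outline segment into 11 points for rasterization.
segment = [quadratic_bezier((0, 0), (50, 100), (100, 0), i / 10) for i in range(11)]
```

At t = 0 and t = 1 the curve passes exactly through the on-curve points; in between it is pulled toward the control point, which is how a font describes a smooth arc with only three coordinates.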
symbols (called scripts). A single script can work for tens or even hundreds of languages. Microsoft, Apple, Sun, Netscape, IBM, Xerox and Novell participated in the development of this standard, and Microsoft and Apple have incorporated Unicode into their operating systems.
1. Fontographer: A Macromedia product, Fontographer is a specialized graphics editor for both Macintosh and Windows platforms. You can use it to create PostScript, TrueType and bitmapped fonts for Macintosh and Windows.
2. Making Pretty Text: To make your text look pretty you need a toolbox full of fonts and special graphics applications that can stretch, shade, color and anti-alias your words into real artwork. Pretty text can be found in bitmapped drawings where characters have been tweaked, manipulated and blended into a graphic image.
3. Hypermedia and Hypertext: Multimedia, the combination of text, graphic and audio elements into a single collection or presentation, becomes interactive multimedia when you give the user some control over what information is viewed and when it is viewed.
When a hypermedia project includes large amounts of text or symbolic content, this content can be indexed and its elements then linked together to afford rapid electronic retrieval of the associated information. When text is stored in a computer instead of on printed pages, the computer's powerful processing capabilities can be applied to make the text more accessible and meaningful. Such text is called hypertext.
4. Hypermedia Structures: Two buzzwords used often in hypertext are link and node. Links are connections between conceptual elements, that is, the nodes, which may consist of text, graphics, sounds or related information in the knowledge base.
5. Searching for Words: The following are typical methods for word searching in hypermedia systems: categories, word relationships, adjacency, alternates, association, negation, truncation, intermediate words, frequency.

Check Your Progress 2
List a few font editing tools.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
iii) The difference between fonts and typefaces
iv) Character sets used in computers and their significance
v) The font editing software which can be used for creating new fonts, and the features of such software
2.11 References
1. Multimedia: Concepts and Practice, by Stephen McGloughlin
2. Multimedia: Computing, Communications and Applications, by Ralf Steinmetz and Klara Nahrstedt
3. Multimedia: Making It Work, by Tay Vaughan
4. Multimedia in Practice: Technology and Applications, by Judith Jeffcoate
Lesson 3 Audio
Contents
3.0 Aims and Objectives
3.1 Introduction
3.2 Power of Sound
3.3 Multimedia Sound Systems
3.4 Digital Audio
3.5 Editing Digital Recordings
3.6 Making MIDI Audio
3.7 Audio File Formats
3.8 Red Book Standard
3.9 Software used for Audio
3.10 Let us sum up
3.11 Lesson-end activities
3.12 Model answers to Check your progress
3.13 References
3.1 Introduction
Sound is perhaps the most important element of multimedia. It is meaningful speech in any language, from a whisper to a scream. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background. Sound is the term used for the analog form; the digitized form of sound is called audio.
cassette tapes. The first step is to digitize the analog material by recording it onto computer-readable digital media. It is necessary to focus on two crucial aspects of preparing digital audio files:
o Balancing the need for sound quality against the available RAM and hard disk resources.
o Setting proper recording levels to get a good, clean recording.
Remember that the sampling rate determines the frequency at which samples are taken for the recording. Sampling at higher rates more accurately captures the high-frequency content of the sound. Audio resolution determines the accuracy with which a sound can be digitized.

Formula for determining the size (in bytes) of a digital audio recording:
Monophonic: sampling rate × duration of recording in seconds × (bit resolution / 8) × 1
Stereo: sampling rate × duration of recording in seconds × (bit resolution / 8) × 2

The sampling rate is how often the samples are taken. The sample size is the amount of information stored per sample; this is called the bit resolution. The number of channels is 2 for stereo and 1 for monophonic. The duration of the recording is measured in seconds.
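The formula above can be turned into a short script for checking file-size estimates (a sketch; the function name is our own):

```python
def audio_file_size(sampling_rate, seconds, bit_resolution, channels):
    """Uncompressed PCM audio size in bytes.

    channels: 1 for monophonic, 2 for stereo."""
    return int(sampling_rate * seconds * (bit_resolution / 8) * channels)

# One minute of CD-quality stereo audio (44.1 kHz, 16-bit):
print(audio_file_size(44100, 60, 16, 2))  # 10584000 bytes, roughly 10 MB
```

This shows why recording levels and quality settings matter: one minute of uncompressed CD-quality stereo already consumes about 10 MB of disk space, while halving the sampling rate or dropping to mono halves the size.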
digital audio recording. Sounds can produce a surreal, otherworldly effect when played backward.
10. Time Stretching: Advanced programs let you alter the length of a sound file without changing its pitch. This feature can be very useful, but watch out: most time-stretching algorithms severely degrade audio quality.

Check Your Progress 1
List a few audio editing features.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
A digital audio file format is preferred in the following circumstances:
o When there is no control over the playback hardware
o When the computing resources and the bandwidth requirements are high
o When dialogue is required
4. *.MID MIDI files, used by both Macintosh and Windows
5. *.WMA Windows Media Audio
6. *.MP3 MP3 audio
7. *.RA RealPlayer
8. *.VOC VOC sound
9. *.AIFF sound format for Macintosh sound files
10. *.OGG Ogg Vorbis
Record an audio clip using the sound recorder in Microsoft Windows for 1 minute. Note down the size of the file. Using any audio compression software, convert the recorded file to MP3 format and compare the sizes of the two audio files.
Digital Signal Processing
Reversing Sounds
Time Stretching
2. The Red Book standard recommends audio recorded at a sample size of 16 bits and a sampling rate of 44.1 kHz. The recording is done with 2 channels (stereo mode).
3.13 References
1. Multimedia: Making It Work, by Tay Vaughan
2. Multimedia: Computing, Communications and Applications, by Ralf Steinmetz and Klara Nahrstedt
Lesson 4 Images
Contents
4.0 Aims and Objectives
4.1 Introduction
4.2 Digital image
4.3 Bitmaps
4.4 Making still images
4.4.1 Bitmap software
4.4.2 Capturing and editing images
4.5 Vectored drawing
4.6 Color
4.7 Image file formats
4.8 Let us sum up
4.9 Lesson-end activities
4.10 Model answers to Check your progress
4.11 References
representations have been discussed in this lesson. At the end of this lesson the learner will be able to:
i) Create his own images
ii) Describe the use of colors and palettes in multimedia
iii) Describe the capabilities and limitations of vector images
iv) Use clip art in multimedia presentations
4.1 Introduction
Still images are an important element of a multimedia project or a web site. In order to make a multimedia presentation look elegant and complete, it is necessary to spend ample time designing the graphics and the layouts. Competent, computer-literate skills in graphic art and design are vital to the success of a multimedia project.
The points at which an image is sampled are known as picture elements, commonly abbreviated as pixels. The pixel values of intensity images are called gray-scale levels (they encode the brightness at each point of the image). The intensity at each pixel is represented by an integer and is determined from the continuous image by averaging over a small neighborhood around the pixel location. If there are just two intensity values, for example black and white, they are represented by the numbers 0 and 1; such images are called binary-valued images. If 8-bit integers are used to store each pixel value, the gray levels range from 0 (black) to 255 (white).
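The mapping from a continuous intensity to a stored integer can be sketched in a few lines (the function name is our own; real digitizers also average over a pixel neighborhood first):

```python
def quantize_intensity(value, bits=8):
    """Map a continuous intensity in [0.0, 1.0] to an integer gray level.

    With 8 bits, 0.0 maps to 0 (black) and 1.0 maps to 255 (white);
    with 1 bit, the result is a binary-valued image."""
    levels = 2 ** bits
    return min(int(value * levels), levels - 1)

print(quantize_intensity(0.0))          # 0   (black)
print(quantize_intensity(1.0))          # 255 (white)
print(quantize_intensity(0.5, bits=1))  # 1   (binary-valued image)
```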
4.3 Bitmaps
A bitmap is a simple information matrix describing the individual dots that are the smallest elements of resolution on a computer screen or other display or printing device. A one-bit-deep matrix is sufficient for monochrome (black and white); greater depth (more bits of information per pixel) is required to describe the more than 16 million colors the picture elements may have, as illustrated in the following figure. The states of all the pixels on a computer screen make up the image seen by the viewer, whether in combinations of black and white or colored pixels in a line of text, a photograph-like picture, or a simple background pattern.
Where do bitmaps come from? How are they made?
o Make a bitmap from scratch with a paint or drawing program.
o Grab a bitmap from an active computer screen with a screen-capture program, and then paste it into a paint program or your application.
o Capture a bitmap from a photo, artwork, or a television image using a scanner or video capture device that digitizes the image.
Once made, a bitmap can be copied, altered, e-mailed, and otherwise used in many creative ways.

Clip Art
A clip art collection may contain a random assortment of images, or it may contain a series of graphics, photographs, sound, and video related to a single topic. For example, Corel, Micrografx, and Fractal Design bundle extensive clip art collections with their image-editing software.

Multiple Monitors
When developing multimedia, it is helpful to have more than one monitor, or a single high-resolution monitor with lots of screen real estate, hooked up to your computer. In this way, you can display the full-screen working area of your project or presentation and still have space to put your tools and other menus. This is particularly important in an authoring system such as Macromedia Director, where the edits and changes you make in one window are immediately visible in the presentation window, provided the presentation window is not obscured by your editing tools.

Check Your Progress 1
List a few software packages that can be used for creating images.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.

Figure: 1-bit bitmap = 2 colors; 4-bit bitmap = 16 colors; 8-bit bitmap = 256 colors.
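The relationship between bit depth and the number of available colors, illustrated in the figure, is simply a power of two:

```python
def colors_for_depth(bits_per_pixel):
    """Number of distinct colors a bitmap can represent at a given bit depth."""
    return 2 ** bits_per_pixel

for depth in (1, 4, 8, 24):
    print(depth, colors_for_depth(depth))
# 1-bit -> 2 colors, 4-bit -> 16, 8-bit -> 256,
# 24-bit -> 16777216 (the "more than 16 million colors" of true-color displays)
```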
distances. A drawn object can be filled with color and patterns, and you can select it as a single object. Typically, image files are compressed to save memory and disk space; many image formats already use compression within the file itself, for example GIF, JPEG, and PNG. Still images may be the most important element of your multimedia project. If you are designing multimedia by yourself, put yourself in the role of graphic artist and layout designer.
4.4.1 Bitmap Software
The abilities and features of image-editing programs for both the Macintosh and Windows range from simple to complex. The Macintosh does not ship with a painting tool, and Windows provides only the rudimentary Paint (see the following figure), so you will need to acquire this very important software separately. Often, bitmap editing or painting programs come as part of a bundle when you purchase your computer, monitor, or scanner.
Figure: The Windows Paint accessory provides rudimentary bitmap editing
4.4.2 Capturing and Editing Images
The image seen on a computer monitor is a digital bitmap stored in video memory, updated about every 1/60 second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia project, it may often be necessary to capture and store an image directly from the screen. It is possible to use the Prt Scr key on the keyboard to capture an image.
Scanning Images
After scanning through countless clip art collections, you may still not find the unusual background you want for a screen about gardening. Sometimes when you search for something too hard, you don't realize that it's right in front of your face. Open the scan in an image-editing program and experiment with different filters, the contrast, and various special effects. Be creative, and don't be afraid to try strange combinations; sometimes mistakes yield the most intriguing results.
4.6 Color
Color is a vital component of multimedia. Management of color is both a subjective and a technical exercise. Picking the right colors and combinations of colors for your project can involve many tries until you feel the result is right.
Understanding Natural Light and Color
The letters of the mnemonic ROY G. BIV, learned by many of us to remember the colors of the rainbow, are the ascending frequencies of the visible light spectrum: red, orange, yellow, green, blue, indigo, and violet. Ultraviolet light, on the other hand, is beyond the higher end of the visible spectrum and can be damaging to humans. The color white is a noisy mixture of all the color frequencies in the visible spectrum. The cornea of the eye acts as a lens to focus light rays onto the retina. The light rays stimulate many thousands of specialized nerves, called rods and cones, that cover the surface of the retina. The eye can differentiate among millions of colors, or hues, consisting of combinations of red, green, and blue.
Additive Color
In the additive color model, a color is created by combining colored light sources in three primary colors: red, green and blue (RGB). This is the process used in a TV or computer monitor.
Subtractive Color
In the subtractive color method, a new color is created by combining colored media such as paints or inks that absorb (or subtract) some parts of the color spectrum of light and reflect the others back to the eye. Subtractive color is the process used to create color in printing. The printed page is made up of tiny halftone dots of three primary colors: cyan, magenta and yellow (CMY).
Check Your Progress 2
Distinguish additive and subtractive colors and state their areas of use.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
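The complementary relationship between the additive (RGB) and subtractive (CMY) primaries can be sketched directly: each ink absorbs exactly the light its RGB counterpart would emit (a simplified model; real printing adds a black ink, CMYK, for dense shadows):

```python
def rgb_to_cmy(r, g, b):
    """Convert an additive RGB color (0-255 per channel) to its
    subtractive CMY complement."""
    return (255 - r, 255 - g, 255 - b)

print(rgb_to_cmy(255, 0, 0))      # red light -> (0, 255, 255): magenta + yellow ink
print(rgb_to_cmy(255, 255, 255))  # white -> (0, 0, 0): no ink on a white page
```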
JPEG .jpg
Windows Metafile .wmf
Portable Network Graphics .png
CompuServe GIF .gif
Apple Macintosh PICT .pict, .pic, .pct
2. In the additive color model, a color is created by combining colored light sources in three primary colors: red, green and blue (RGB). This is the process used in a TV or computer monitor. In the subtractive color method, a new color is created by combining inks in the primary colors cyan, magenta and yellow (CMY), which subtract parts of the light spectrum. Subtractive color is the process used to create color in printing.
4.11 References
1. Multimedia: Making It Work, by Tay Vaughan
2. Multimedia in Practice: Technology and Applications, by Judith Jeffcoate
5.3 Animation Techniques
5.4 Animation File formats
5.5 Video
5.6 Broadcast video Standard
5.7 Shooting and editing video
5.8 Video Compression
5.9 Let us sum up
5.10 Lesson-end activities
5.11 Model answers to Check your progress
5.12 References
5.1 Introduction
Animation makes static presentations come alive. It is visual change over time and can add great power to multimedia projects. Carefully planned, well-executed video clips can make a dramatic difference in a multimedia project. Animation is created from drawn pictures, while video is created from real-time visuals.
Animation is possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon called phi. An object seen by the human eye remains chemically mapped on the eye's retina for a brief time after viewing. Combined with the human mind's need to conceptually complete a perceived action, this makes it possible for a series of images that are changed very slightly and very rapidly, one after the other, to seemingly blend together into a visual illusion of movement. The following figure shows a few cels, or frames, of a rotating logo. When the images are progressively and rapidly changed, the arrow of the compass is perceived to be spinning. Television video builds entire frames or pictures every second; the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move, or translate, it a few pixels for each frame.
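The workload implied by frame-by-frame animation follows directly from the playback rate. A minimal sketch, using the standard film rate of 24 frames per second and the NTSC television rate of 30:

```python
def total_frames(fps, seconds):
    """Number of individual frames needed for an animation of the given length."""
    return fps * seconds

print(total_frames(24, 60))  # 1440 frames for one minute at the film rate
print(total_frames(30, 60))  # 1800 frames at the NTSC television rate
```

This is why classic cel animation was so labor-intensive: every one of those frames had to be drawn.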
When you create an animation, organize its execution into a series of logical steps. First, gather in your mind all the activities you wish to provide in the animation; if it is complicated, you may wish to create a written script with a list of activities and required objects. Choose the animation tool best suited for the job. Then build and tweak your sequences; experiment with lighting effects. Allow plenty of time for this phase when you are experimenting and testing. Finally, post-process your animation, doing any special rendering and adding sound effects.
5.3.1 Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by acetate or plastic. Cels of famous animated cartoons have become sought-after, suitable-for-framing collectors' items. Cel animation artwork begins with keyframes (the first and last frame of an action). For example, when an animated figure of a man walks across the screen, he balances the weight of his entire body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up to support the body. The animation techniques made famous by Disney use a series of progressively different drawings on each frame of movie film, which plays at 24 frames per second.
A minute of animation may thus require as many as 1,440 separate frames.
5.3.2 Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the vocabulary of classic animators. On the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing. The word inks, in computer animation terminology, usually means special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects. The primary difference between animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software. In 2-D animation the animator creates an object and describes a path for the object to follow; the software takes over, actually creating the animation on the fly as the program is being viewed by the user. In 3-D animation the animator puts his effort into creating the models of individual objects and designing the characteristics of their shapes and surfaces.
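Tweening, mentioned above, is the automatic generation of the in-between frames separating two keyframes. A minimal linear-interpolation sketch (the function name is our own; real tools also ease in and out rather than moving at constant speed):

```python
def tween(start, end, frames):
    """Linearly interpolate an object's (x, y) position between two
    keyframes, producing the in-between frames an animator would
    otherwise have to draw by hand."""
    return [
        (start[0] + (end[0] - start[0]) * i / (frames - 1),
         start[1] + (end[1] - start[1]) * i / (frames - 1))
        for i in range(frames)
    ]

# Move a sprite from the left edge to x=100 over 5 frames:
print(tween((0, 50), (100, 50), 5))
# [(0.0, 50.0), (25.0, 50.0), (50.0, 50.0), (75.0, 50.0), (100.0, 50.0)]
```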
5.3.3 Kinematics
Kinematics is the study of the movement and motion of structures that have joints, such as a walking man. Inverse kinematics, found in high-end 3-D programs, is the process by which you link objects such as hands to arms and define their relationships and limits. Once those relationships are set, you can drag these parts around and let the computer calculate the result.
5.3.4 Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform transitions not only between still images but often between moving images as well.
The morphed images were built at a rate of 8 frames per second, with each transition taking a total of 4 seconds. Some products that offer morphing features are:
o Black Belt's EasyMorph and WinImages
o Human Software's Squizz
o Valis Group's Flo, MetaFlo, and MovieFlo

Check Your Progress 1
List the different animation techniques.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
5.5 Video
Analog versus Digital Digital video has supplanted analog video as the method of choice for making video for multimedia use. While broadcast stations and professional production and postproduction houses remain greatly invested in analog video hardware (according to Sony, there are more than 350,000 Betacam SP devices in use today), digital video gear produces excellent finished products at a fraction of the cost of analog. A digital camcorder directly connected to a computer workstation eliminates the image-degrading analog-to-digital conversion step typically performed by expensive video capture cards, and brings the power of nonlinear video editing and production to everyday users.
HDTV
High-Definition Television (HDTV) provides high resolution in a 16:9 aspect ratio (see the following figure). This aspect ratio allows the viewing of CinemaScope and Panavision movies. There is contention between the broadcast and computer industries about whether to use interlacing or progressive-scan technologies.

Check Your Progress 2
List the different broadcast video standards and compare their specifications.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
Video Tips
A useful tool easily implemented in most digital video editing applications is blue screen, Ultimatte, or chroma key editing. Blue screen is a popular technique for making multimedia titles because expensive sets are not required. Incredible backgrounds can be generated using 3-D modeling and graphics software, and one or more actors, vehicles, or other objects can be neatly layered onto that background. Applications such as VideoShop, Premiere, Final Cut Pro, and iMovie provide this capability.
Recording Formats
S-VHS Video
In S-VHS video, color and luminance information are kept on two separate tracks. The result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast stations, this would not be the best choice.
Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax. Panasonic developed its own standard based on a similar technology, called MII. Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called Digital Betacam.
Digital Video
Full integration of motion video on computers eliminates the analog television form of video from the multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-ROM, or other mass-storage device, that clip can be played back on the computer's monitor without overlay boards, videodisk players, or second monitors. This playback of digital video is accomplished using a software architecture such as QuickTime
or AVI. As a multimedia producer or developer, you may need to convert video source material from its still-common analog form (videotape) to a digital form manageable by the end user's computer system, so an understanding of analog video and some special hardware must remain in your multimedia toolbox. Analog-to-digital conversion of video can be accomplished using the video overlay hardware described above, or the video can be delivered directly to disk using FireWire cables. Repetitively digitizing a full-screen color video image every 1/30 second and storing it to disk or RAM severely taxes both Macintosh and PC processing capabilities: special hardware, compression firmware, and massive amounts of digital storage space are required.
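The storage burden described above is easy to quantify. As a rough sketch (the 640 x 480 window, 24-bit color, and 30 fps figures are illustrative assumptions, not values from the text):

```python
# Data rate of uncompressed full-screen color video.
# Assumed figures: 640 x 480 pixels, 24-bit color, 30 frames per second.
width, height = 640, 480
bytes_per_pixel = 3            # 24-bit color
frames_per_second = 30

bytes_per_frame = width * height * bytes_per_pixel       # 921,600 bytes
bytes_per_second = bytes_per_frame * frames_per_second   # 27,648,000 bytes

print(f"{bytes_per_frame:,} bytes per frame")
print(f"{bytes_per_second / 1e6:.1f} MB per second, uncompressed")
```

At roughly 27 MB per second, a single minute of uncompressed video exceeds 1.6 GB, which is why compression hardware and massive storage were unavoidable.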
Multimedia Systems- M.Sc(IT)
Two levels of compression and decompression are provided by DVI: Production Level Video (PLV) and Real Time Video (RTV). PLV and RTV both use variable compression
rates. DVI's algorithms can compress video images at ratios between 80:1 and 160:1. DVI will play back video in full-frame size and in full color at 30 frames per second.
Optimizing Video Files for CD-ROM
CD-ROMs provide an excellent distribution medium for computer-based video: they are inexpensive to mass-produce, and they can store great quantities of information. CD-ROM players offer slow data transfer rates, but adequate video transfer can be achieved by taking care to properly prepare your digital video files. Limit the amount of synchronization required between the video and audio. With Microsoft's AVI files, the audio and video data are already interleaved, so this is not a necessity, but with QuickTime files, you should flatten your movie. Flattening means you interleave the audio and video segments together. Use regularly spaced key frames, 10 to 15 frames apart, so that temporal compression can correct for seek-time delays. Seek time is how long it takes the CD-ROM player to locate specific data on the disc. Even fast 56x drives must spin up, causing some delay (and occasionally substantial noise). The size of the video window and the frame rate you specify dramatically affect performance. In QuickTime, 20 frames per second played in a 160 x 120-pixel window is roughly equivalent to playing 10 frames per second in a 320 x 240 window. The more data that has to be decompressed and transferred from the CD-ROM to the screen, the slower the playback.
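To see how tight this budget is, consider what a single-speed CD-ROM drive can sustain. The 150 KB/s rate is the standard 1x figure; the window size, color depth, and compression ratio here are assumptions for illustration:

```python
# How many frames per second fit through a CD-ROM's transfer rate.
cd_rate = 150 * 1024                 # bytes/s delivered by a 1x drive
width, height = 320, 240
bytes_per_pixel = 2                  # 16-bit color (assumed)
compression_ratio = 25               # assumed codec compression ratio

raw_frame = width * height * bytes_per_pixel          # 153,600 bytes
compressed_frame = raw_frame / compression_ratio      # 6,144 bytes
fps_possible = cd_rate / compressed_frame
print(f"{fps_possible:.0f} frames per second fit in the 1x budget")
```

Halving the window dimensions quarters the raw frame size, which is why a smaller window or a lower frame rate plays back more smoothly from disc.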
the duration and size of the video. Make a list of software that can play these video clips.
6.1 Introduction
The hardware required for a multimedia PC depends on personal preference, budget, project delivery requirements, and the type of material and content in the project. Multimedia production was once much smoother and easier on the Macintosh than in Windows, but multimedia content production in Windows has been made easier with additional storage and lower computing costs. The right selection of multimedia hardware results in a good-quality multimedia presentation.
6.4 SCSI
SCSI (Small Computer System Interface) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and optical drives (CD, DVD, etc.). SCSI is most commonly pronounced "scuzzy". Since its standardization in 1986, SCSI has been commonly used in the Apple Macintosh and Sun Microsystems computer lines and in PC server systems. SCSI has never been popular in the low-priced IBM PC world, owing to the lower cost and adequate performance of the ATA hard disk standard. SCSI drives and even SCSI RAIDs became common in PC workstations for video or audio production, but the appearance of large, cheap SATA drives means that SATA is rapidly taking over this market. Currently, SCSI is popular on high-performance workstations and servers. RAIDs on servers almost always use SCSI hard disks, though a number of manufacturers offer SATA-based RAID systems as a cheaper option. Desktop computers and notebooks more typically use the ATA/IDE or the newer SATA interfaces for hard disks, and USB and FireWire connections for external devices.
6.4.1 SCSI interfaces
SCSI is available in a variety of interfaces. The first, still very common, was parallel SCSI (also called SPI), which uses a parallel electrical bus design. The traditional SPI design is making a transition to Serial Attached SCSI, which switches to a serial point-to-point design but retains other aspects of the technology. iSCSI drops the physical
implementation entirely, and instead uses TCP/IP as a transport mechanism. Finally, many other interfaces which do not rely on complete SCSI standards still implement the SCSI command protocol. The following table compares the different types of SCSI.
6.4.2 SCSI cabling
Internal SCSI cables are usually ribbon cables with multiple 68-pin or 50-pin connectors. External cables are shielded and have connectors only on the ends.
iSCSI
iSCSI preserves the basic SCSI paradigm, especially the command set, almost unchanged. iSCSI advocates project the iSCSI standard, an embedding of SCSI-3 over TCP/IP, as displacing Fibre Channel in the long run, arguing that Ethernet data rates are currently increasing faster than data rates for Fibre Channel and similar disk-attachment technologies. iSCSI could thus address both the low-end and high-end markets with a single commodity-based technology.
Serial SCSI
Four recent versions of SCSI, namely SSA, FC-AL, FireWire, and Serial Attached SCSI (SAS), break from the traditional parallel SCSI standards and perform data transfer via serial communications. Although much of the documentation of SCSI talks about the parallel interface, most contemporary development effort is on serial SCSI. Serial SCSI has a number of advantages over parallel SCSI: faster data rates, hot swapping, and improved fault isolation. The primary reason for the shift to serial interfaces is the clock skew issue of high-speed parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems caused by cabling and termination. Serial SCSI devices are more expensive than the equivalent parallel SCSI devices.

The table below compares the different types of parallel SCSI:

Term          Bus Speed (MB/s)   Bus Width (bits)   Devices supported
SCSI-1               5                  8                   8
SCSI-2              10                  8                   8
SCSI-3              20                  8                  16
SCSI-3              20                  8                   4
SCSI-3              20                 16                  16
SCSI-3 UW           40                 16                  16
SCSI-3 UW           40                 16                   8
SCSI-3 UW           40                 16                   4
SCSI-3 U2           40                  8                   8
SCSI-3 U2           80                 16                   2
SCSI-3 U2W          80                 16                  16
SCSI-3 U2W          80                 16                   2
SCSI-3 U3          160                 16                  16

6.4.3 SCSI command protocol
In addition to many different hardware implementations, the SCSI standards also include a complex set of command protocol definitions. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with iSCSI and serial SCSI. Other technologies which use the SCSI command set include the ATA Packet Interface, the USB Mass Storage class, and FireWire SBP-2. In SCSI terminology, communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds. SCSI commands are sent in a Command Descriptor Block (CDB). The CDB consists of a one-byte operation code followed by five or more bytes containing command-specific parameters. At the end of the command sequence the target returns a Status Code byte, which is usually 00h for success, 02h for an error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain a Key Code Qualifier (KCQ) from the target. The Check Condition and Request Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition. There are four categories of SCSI commands: N (non-data), W (writing data from initiator to target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in total, with the most common being:
Test unit ready: queries the device to see if it is ready for data transfers (disk spun up, media loaded, etc.).
Inquiry: returns basic device information; also used to "ping" the device, since it does not modify sense data.
Request sense: returns any error codes from the previous command that returned an error status.
Send diagnostic and Receive diagnostic results: run a simple self-test, or a specialized test defined in a diagnostic page.
Start/Stop unit: spins disks up and down, loads/unloads media.
Read capacity: returns storage capacity.
Format unit: sets all sectors to all zeroes; also allocates logical blocks, avoiding defective sectors.
Read format capacities: reads the capacity of the sectors.
Read (four variants): reads data from a device.
Write (four variants): writes data to a device.
Log sense: returns current information from log pages.
Mode sense: returns current device parameters from mode pages.
Mode select: sets device parameters in a mode page.
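A CDB is just a short byte sequence, so the structure described above can be sketched directly. The opcode values (00h Test unit ready, 12h Inquiry, 08h Read(6)) are standard SCSI opcodes; the helper function names are ours, and the layout is simplified:

```python
# Sketch of 6-byte Command Descriptor Blocks (CDBs): one opcode byte,
# command-specific parameter bytes, and a trailing control byte.
TEST_UNIT_READY = 0x00
INQUIRY = 0x12
READ_6 = 0x08

def cdb6(opcode, b1=0, b2=0, b3=0, b4=0, control=0):
    return bytes([opcode, b1, b2, b3, b4, control])

def read6_cdb(lba, transfer_length):
    """Read(6): the LBA occupies only 21 bits across bytes 1-3, which
    limits addressing to 2**21 blocks (1 GiB of 512-byte blocks)."""
    assert 0 <= lba < 2 ** 21
    return cdb6(READ_6,
                (lba >> 16) & 0x1F,   # top 5 LBA bits
                (lba >> 8) & 0xFF,
                lba & 0xFF,
                transfer_length & 0xFF)

tur = cdb6(TEST_UNIT_READY)
rd = read6_cdb(0x12345, 1)
```

The status byte the target returns after such a command is then checked against the values in the text (00h good, 02h Check Condition, 08h busy). The 21-bit limit of Read(6) is also why the 32-bit Read(10) and Read(12) variants discussed below exist.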
Each device on the SCSI bus is assigned at least one Logical Unit Number (LUN). Simple devices have just one LUN; more complex devices may have multiple LUNs. A "direct access" (i.e. disk-type) storage device consists of a number of logical blocks, usually referred to by the term Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of LBAs has evolved over time, and so four different command variants are provided for reading and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long,
Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus various other parameter options. A "sequential access" (i.e. tape-type) device does not have a specific capacity, because its capacity depends on the length of the tape, which is not known exactly. Reads and writes on a sequential access device happen at the current position, not at a specific LBA. The block size on sequential access devices can be either fixed or variable, depending on the specific device. (Earlier devices, such as 9-track tape, tended to be fixed-block, while later types, such as DAT, almost always supported variable block sizes.)
6.4.4 SCSI device identification
In the modern SCSI transport protocols, there is an automated process of "discovery" of the IDs. SSA initiators "walk the loop" to determine what devices are there and then assign each one a 7-bit "hop-count" value. FC-AL initiators use the LIP (Loop Initialization Protocol) to interrogate each device port for its WWN (World Wide Name). For iSCSI, because of the unlimited scope of the (IP) network, the process is quite complicated. These discovery processes occur at power-on/initialization time, and also if the bus topology changes later, for example if an extra device is added. On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a "SCSI ID", which is a number in the range 0-7 on a narrow bus and in the range 0-15 on a wide bus. On earlier models a physical jumper or switch controls the SCSI ID of the initiator (host adapter). On modern host adapters (since about 1997), doing I/O to the adapter sets the SCSI ID; for example, the adapter often contains a BIOS program that runs when the computer boots up, and that program has menus that let the operator choose the SCSI ID of the host adapter. Alternatively, the host adapter may come with software that must be installed on the host computer to configure the SCSI ID.
The traditional SCSI ID for a host adapter is 7, as that ID has the highest priority during bus arbitration (even on a 16-bit bus). The SCSI ID of a device in a drive enclosure that has a backplane is set either by jumpers or by the slot in the enclosure the device is installed into, depending on the model of the enclosure. In the latter case, each slot on the enclosure's backplane delivers control signals to the drive to select a unique SCSI ID. A SCSI enclosure without a backplane often has a switch for each drive to choose the drive's SCSI ID. The enclosure is packaged with connectors that must be plugged into the drive where the jumpers are typically located; the switch emulates the necessary jumpers. While there is no standard
that makes this work, drive designers typically set up their jumper headers in a consistent format that matches the way these switches are implemented. Note that a SCSI target device (which can be called a "physical unit") is often divided into smaller "logical units". For example, a high-end disk subsystem may be a single SCSI device but contain dozens of individual disk drives, each of which is a logical unit (more commonly, it is not that simple: virtual disk devices are generated by the subsystem based on the storage in those physical drives, and each virtual disk device is a logical unit). The SCSI ID, WWNN, etc. in this case identifies the whole subsystem, and a second number, the logical unit number (LUN), identifies a disk device within the subsystem. It is quite common, though incorrect, to refer to the logical unit itself as a "LUN". Accordingly, the actual LUN may be called a "LUN number" or "LUN id".
Setting the bootable (or first) hard disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside for the floppy drive, while SCSI ID 3 is typically for a CD-ROM.
6.4.5 SCSI enclosure services
In larger SCSI servers, the disk-drive devices are housed in an intelligent enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics.
Check Your Progress 1
List a few types of SCSI.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
6.5 MCI
The MCI interface is a high-level API developed by Microsoft and IBM for controlling multimedia devices, such as CD-ROM players and audio controllers. The advantage is that MCI commands can be transmitted both from the programming language and from the scripting language (OpenScript, Lingo). For a number of years, the MCI interface has been phased out in favor of the DirectX APIs.
6.5.1 MCI Devices
The Media Control Interface consists of four parts:
AVIVideo
CDAudio
Sequencer
WaveAudio
Each of these so-called MCI devices can play a certain type of file; e.g. AVIVideo plays .avi files, and CDAudio plays CD tracks, among others. Other MCI devices have also been made available over time.
6.5.2 Playing media through the MCI interface
To play a type of media, it needs to be initialized correctly using MCI commands. These commands are subdivided into categories:
System Commands
Required Commands
Basic Commands
Extended Commands
6.6 IDE
Usually storage devices connect to the computer through an Integrated Drive Electronics (IDE) interface. Essentially, an IDE interface is a standard way for a storage device to connect to a computer. IDE is actually not the true technical name for the interface standard. The original name, AT Attachment (ATA), signified that the interface was initially developed for the IBM AT computer. IDE was created as a way to standardize the use of hard drives in computers. The basic concept behind IDE is that the hard drive and the controller should be combined. The controller is a small circuit board with chips that provide guidance as to exactly how the hard drive stores and accesses data. Most controllers also include some memory that acts as a buffer to enhance hard drive performance. Before IDE, controllers and hard drives were separate and often proprietary. In other words, a controller from one manufacturer might not work with a hard drive from another
manufacturer. The distance between the controller and the hard drive could result in poor signal quality and affect performance. Obviously, this caused much frustration for computer users. IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of the wires laid flat next to each other instead of bunched or wrapped together in a bundle. IDE ribbon cables have either 40 or 80 wires. There is a connector at each end of the cable and another one about two-thirds of the distance from the motherboard connector. This cable cannot exceed 18 inches (46 cm) in total length (12 inches from the first to the second connector, and 6 inches from the second to the third) to maintain signal integrity. The three connectors are typically different colors and attach to specific items:
The blue connector attaches to the motherboard.
The black connector attaches to the primary (master) drive.
The grey connector attaches to the secondary (slave) drive.
Enhanced IDE (EIDE), an extension to the original ATA standard again developed by Western Digital, allowed the support of drives having a storage capacity larger than 504 MiB (528 MB), up to 7.8 GiB (8.4 GB). Although these new names originated in branding convention and not as an official standard, the terms IDE and EIDE often appear as if interchangeable with ATA. This may be attributed to the two technologies being introduced with the same consumable devices: these "new" ATA hard drives. With the introduction of Serial ATA around 2003, conventional ATA was retroactively renamed Parallel ATA (P-ATA), referring to the method in which data travels over the wires in this interface.
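The 504 MiB ceiling mentioned above falls straight out of the original BIOS/ATA CHS addressing limits: 1024 cylinders, 16 heads, 63 sectors per track, 512 bytes per sector.

```python
# The original IDE/ATA capacity ceiling from CHS geometry limits.
cylinders, heads, sectors, sector_size = 1024, 16, 63, 512
capacity = cylinders * heads * sectors * sector_size

print(f"{capacity:,} bytes")           # 528,482,304 bytes
print(f"{capacity / 2**20:.0f} MiB")   # 504 MiB
print(f"{capacity / 10**6:.0f} MB")    # ~528 MB
```

The same number expressed in decimal megabytes is the "528 MB" figure, which is why both values appear in discussions of this limit.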
6.7 USB
Universal Serial Bus (USB) is a serial bus standard to interface devices. A major component in the legacy-free PC, USB was designed to allow peripherals to be connected using a single standardized interface socket and to improve plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include providing power to low-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer specific, individual device drivers to be installed. USB is intended to help retire all legacy varieties of serial and parallel ports. USB
can connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices USB has become the standard connection method. USB is also used extensively to connect non-networked printers; USB simplifies connecting several printers to one computer. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles.
The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Apple Inc., Hewlett-Packard, NEC, Microsoft, Intel, and Agere. A USB system has an asymmetric design, consisting of a host, a multitude of downstream USB ports, and multiple peripheral devices connected in a tiered-star topology. Additional USB hubs may be included in the tiers, allowing branching into a tree structure, subject to a limit of five levels of tiers. A USB host may have multiple host controllers, and each host controller may provide one or more USB ports. Up to 127 devices, including the hub devices, may be connected to a single host controller. USB devices are linked in series through hubs. There always exists one hub known as the root hub, which is built into the host controller. So-called "sharing hubs" also exist, allowing multiple computers to access the same peripheral device(s), either switching access between PCs automatically or manually. They are popular in small-office environments; in network terms they converge rather than diverge branches. A single physical USB device may consist of several logical sub-devices that are referred to as device functions, because each individual device may provide several functions, such as a webcam (video device function) with a built-in microphone (audio device function).
Check Your Progress 2
List the connecting devices discussed in this lesson.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
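The 127-device limit mentioned in this lesson comes from USB's 7-bit device address field; address 0 is reserved for devices that have not yet been assigned an address during enumeration.

```python
# Why a USB host controller tops out at 127 devices.
address_bits = 7
total_addresses = 2 ** address_bits   # 128 possible addresses
usable_devices = total_addresses - 1  # address 0 reserved for enumeration
print(usable_devices)                 # 127
```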
7.1 Introduction
A data storage device is a device for recording (storing) information (data). Recording can be done using virtually any form of energy. A storage device may hold information, process information, or both. A device that only holds information is a
recording medium. Devices that process information (data storage equipment) may access either a separate portable (removable) recording medium or a permanent component to store and retrieve information. Electronic data storage is storage which requires electrical power to store and retrieve data. Most storage devices that do not require visual optics to read data fall into this category. Electronic data may be stored in either an analog or a digital signal format. This type of data is considered to be electronically encoded data, whether or not it is electronically stored. Most electronic data storage media (including some forms of computer storage) are considered permanent (non-volatile) storage; that is, the data will remain stored when power is removed from the device. In contrast, data held in volatile memory is lost when power is removed.
processor cycles while it swaps needed portions of program code into and out of memory. In some cases, increasing available RAM may show more performance improvement on your system than upgrading the processor chip. On an MPC platform, multimedia authoring can also consume a great deal of memory. You may need to open many large graphics and audio files, as well as your
authoring system, all at the same time, to facilitate faster copying/pasting and then testing in your authoring software. Although 8 MB is the minimum under the MPC standard, much more is required today.
servers increase the demand for centralized data storage requiring terabytes (1 trillion bytes); hard disks will be configured into fail-proof redundant arrays offering built-in protection against crashes.
Hollywood. DVD also supports Dolby Pro Logic Surround Sound, standard stereo, and mono audio. Users can randomly access any section of the disc and use the slow-motion and freeze-frame features during movies. Audio tracks can be programmed for as many as 8 different languages, with graphic subtitles in 32 languages. Some manufacturers, such
as Toshiba are already providing parental control features in their players (users select lockout ratings from G to NC-17).
7.9 CD Recorders
With a compact disc recorder, you can make your own CDs, using special CD-recordable (CD-R) blank optical discs to create a CD in most formats of CD-ROM and CD-Audio. The machines are made by Sony, Philips, Ricoh, Kodak, JVC, Yamaha, and Pinnacle. Software, such as Adaptec's Toast for Macintosh or Easy CD Creator for Windows, lets you organize files on your hard disk(s) into a virtual structure, then writes them to the CD in that order. CD-R discs are made differently than normal CDs but can play in any CD-Audio or CD-ROM player. They are available in either a 63-minute capacity (about 560 MB) or a 74-minute capacity (about 650 MB). These write-once CDs make excellent high-capacity file archives and are used extensively by multimedia developers for premastering and testing CD-ROM projects and titles.
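The capacity figures quoted above can be derived from the CD's geometry: 75 sectors per second of playing time, with 2048 bytes of user data per Mode 1 data sector (the function name here is ours):

```python
# Approximate CD-R data capacity from its audio-derived geometry.
def cd_capacity_bytes(minutes, sectors_per_second=75, user_bytes=2048):
    return minutes * 60 * sectors_per_second * user_bytes

for minutes in (63, 74):
    cap = cd_capacity_bytes(minutes)
    print(f"{minutes} min -> {cap:,} bytes (~{cap / 2**20:.0f} MiB)")
```

The 74-minute result lands on the 650 MB figure quoted; the 63-minute result comes out near the 560 MB figure, with the exact number depending on whether decimal MB or binary MiB is meant.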
Check Your Progress 1
Specify any five storage devices and list their storage capacity.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
The following points have been discussed:
RAM is a storage device for temporary storage, used to hold the application programs under execution.
Secondary storage devices are used to store data permanently.
The storage capacity of secondary storage is greater compared with RAM.
8.1 Introduction
Optical storage devices have become the order of the day. The high storage capacity available in optical storage devices has made them the storage of choice for multimedia content. Apart from the high storage capacity, optical storage devices have
8.2 CD-ROM
A Compact Disc or CD is an optical disc used to store digital data, originally developed for storing digital audio. The CD, available on the market since late 1982, remains the standard playback medium for commercial audio recordings to the present day, though it has lost ground in recent years to MP3 players. An audio CD consists of one or more stereo tracks stored using 16-bit PCM coding at a sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can hold approximately 80 minutes of audio. There are also 80 mm discs, sometimes used for
CD singles, which hold approximately 20 minutes of audio. The technology was later adapted for use as a data storage device, known as a CD-ROM, and to include record-once and re-writable media (CD-R and CD-RW respectively). CD-ROMs and CD-Rs remain widely used technologies in the computer industry as of 2007. The CD and its extensions have been extremely successful: in 2004, the worldwide sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs. By 2007, 200 billion CDs had been sold worldwide.
8.2.1 CD-ROM History
In 1979, Philips and Sony set up a joint task force of engineers to design a new digital audio disc. The CD was originally thought of as an evolution of the gramophone record, rather than primarily as a data storage medium. Only later did the concept of an "audio file" arise, along with the generalizing of this to any data file. From its origins as a music format, the Compact Disc has grown to encompass other applications. In June 1985, the CD-ROM (read-only memory) and, in 1990, the CD-Recordable were introduced, also developed by Sony and Philips.
8.2.2 Physical details of CD-ROM
A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate plastic and weighs approximately 16 grams. A thin layer of aluminium (or, more rarely, gold, used for its longevity, such as in some limited-edition audiophile CDs) is applied to the surface to make it reflective, and is protected by a film of lacquer. CD data is stored as a series of tiny indentations (pits), encoded in a tightly packed spiral track molded into the top of the polycarbonate layer. The areas between pits are known as "lands". Each pit is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 µm in length. The spacing between the tracks, the pitch, is 1.6 µm. A CD is read by focusing a 780 nm wavelength semiconductor laser through the bottom of the polycarbonate layer.
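From the 1.6 µm track pitch one can estimate the total length of the spiral track. The program-area radii below (roughly 25 mm to 58 mm) are approximations not taken from the text; the 1.2 m/s scanning velocity is the standard Red Book figure:

```python
import math

# Estimate the CD spiral track length: the program-area annulus divided
# by the track pitch (each turn of the spiral occupies one pitch-width ring).
r_inner, r_outer = 0.025, 0.058   # metres (approximate program area)
pitch = 1.6e-6                    # metres between adjacent turns
velocity = 1.2                    # metres/second scanning speed

length = math.pi * (r_outer**2 - r_inner**2) / pitch
print(f"track length ~{length / 1000:.1f} km")
print(f"playing time ~{length / velocity / 60:.0f} minutes")
```

The estimate comes out at roughly 5.4 km of track and about 75 minutes of audio, consistent with the standard disc capacities discussed in this lesson.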
While CDs are significantly more durable than earlier audio formats, they are susceptible to damage from daily usage and environmental factors. Pits are much closer to the label side of a disc, so that defects and dirt on the clear side can be out of focus during playback. Discs consequently suffer more damage because of defects such as scratches on the label side, whereas clear-side scratches can be repaired by refilling them with plastic of similar index of refraction, or by careful polishing. Disc shapes and diameters The digital data on a CD begins at the center of the disc and proceeds outwards to
the edge, which allows adaptation to the different size formats available. Standard CDs are available in two sizes. By far the most common is 120 mm in diameter, with a 74 or
80-minute audio capacity and a 650 or 700 MB data capacity. 80 mm discs ("Mini CDs") were originally designed for CD singles and can hold up to 21 minutes of music or 184 MB of data, but they never really became popular. Today nearly all singles are released on 120 mm CDs, in a format called the Maxi single.
minutes less. A disc with data packed slightly more densely is tolerated by most players (though some old ones fail). Using a linear velocity of 1.2 m/s and a track
pitch of 1.5 µm leads to a playing time of 80 minutes, or a capacity of 700 MB. Even higher capacities on non-standard discs (up to 99 minutes) are available, at least as recordables, but generally the tighter the tracks are squeezed, the worse the compatibility.
Data structure
The smallest entity on a CD is called a frame. A frame consists of 33 bytes and contains six complete 16-bit stereo samples (2 bytes × 2 channels × 6 samples = 24 bytes). The other nine bytes consist of eight Cross-Interleaved Reed-Solomon Coding error-correction bytes and one subcode byte, used for control and display. Each byte is translated into a 14-bit word using Eight-to-Fourteen Modulation, which alternates with 3-bit merging words. In total we have 33 × (14 + 3) = 561 bits. A 27-bit unique synchronization word is added, so that the number of bits in a frame totals 588 (of which only 192 bits are music). These 588-bit frames are in turn grouped into sectors. Each sector contains 98 frames, totaling 98 × 24 = 2,352 bytes of music. The CD is played at a speed of 75 sectors per second, which results in 176,400 bytes per second. Divided by 2 channels and 2 bytes per sample, this results in a sample rate of 44,100 samples per second.
"Frame"
For the Red Book stereo audio CD, the time format is commonly measured in minutes, seconds, and frames (mm:ss:ff), where one frame corresponds to one sector, or 1/75th of a second of stereo sound. Note that in this context, the term frame is erroneously applied in editing applications and does not denote the physical frame described above. In editing and extracting, the frame is the smallest addressable time interval for an audio CD, meaning that track start and end positions can only be defined in 1/75-second steps.
Logical structure
The largest entity on a CD is called a track. A CD can contain up to 99 tracks (including a data track for mixed-mode discs).
Each track can in turn have up to 100 indexes, though players which handle this feature are rarely found outside of pro audio, particularly radio broadcasting. The vast majority of songs are recorded under index 1, with the pre-gap being index 0. Sometimes hidden tracks are placed at the end of the last track of the disc, often using index 2 or 3. This is also the case with some discs offering "101 sound effects", with 100 and 101 being index 2 and 3 on track 99. The index, if used, is occasionally put on the track listing as a decimal part of the track number, such as 99.2 or 99.3.
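The arithmetic in the Data structure section above can be checked directly:

```python
# Verifying the Red Book frame and sector arithmetic quoted in the text.
samples_per_frame = 6        # complete 16-bit stereo samples per frame
frames_per_sector = 98
sectors_per_second = 75
bytes_per_sample = 2 * 2     # 2 bytes x 2 channels

sample_rate = samples_per_frame * frames_per_sector * sectors_per_second
audio_bytes = samples_per_frame * bytes_per_sample * frames_per_sector
byte_rate = audio_bytes * sectors_per_second
frame_bits = 33 * (14 + 3) + 27   # EFM words, merging bits, sync word

print(sample_rate)   # 44100 samples per second
print(audio_bytes)   # 2352 bytes of audio per sector
print(byte_rate)     # 176400 bytes per second
print(frame_bits)    # 588 bits per physical frame
```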
CD-Text
CD-Text is an extension of the Red Book specification for audio CD that allows for the storage of additional text information (e.g., album name, song name, artist) on a standards-compliant audio CD. The information is stored either in the lead-in area of the CD, where there is roughly five kilobytes of space available, or in the subcode channels R to W on the disc, which can store about 31 megabytes.
CD + Graphics
Compact Disc + Graphics (CD+G) is a special audio compact disc that contains
graphics data in addition to the audio data on the disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, can output a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. CD + Extended Graphics Compact Disc + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant of the Compact Disc + Graphics (CD+G) format. Like CD+G, CD+EG utilizes basic CD-ROM features to display text and video information in addition to the music being played. This extra data is stored in subcode channels R-W. CD-MIDI Compact Disc MIDI or CD-MIDI is a type of audio CD where sound is recorded in MIDI format, rather than the PCM format of Red Book audio CD. This provides much greater capacity in terms of playback duration, but MIDI playback is typically less realistic than PCM playback. Video CD Video CD (aka VCD, View CD, Compact Disc digital video) is a standard digital format for storing video on a Compact Disc. VCDs are playable in dedicated VCD players, most modern DVD-Video players, and some video game consoles. The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard. Overall picture quality is intended to be comparable to VHS video, though VHS has twice as many scanlines (approximately 480 NTSC and 580 PAL) and therefore double the vertical resolution. Poorly compressed video in VCD tends to be of lower quality than VHS video, but VCD exhibits block artifacts rather than analog noise.
Super Video CD
Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video on standard compact discs. SVCD was intended as a successor to Video CD and an alternative to DVD-Video, and falls somewhere between both in terms of technical capability and picture quality. SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes of standard-quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification, one must lower the video bitrate, and therefore quality, in order to accommodate very long videos. It is usually difficult to fit much more than 100 minutes of video onto one SVCD without incurring significant quality loss, and many hardware players are unable to play video with an instantaneous bitrate lower than 300 to 600 kilobits per second.

Photo CD
Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints and slides, using a special proprietary encoding. Photo CD discs are defined in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players, and any computer with suitable software, irrespective of the operating system. The images can also be printed out on photographic paper with a special Kodak machine.
Picture CD
Picture CD is another photo product by Kodak, following on from the earlier Photo CD product. It holds photos from a single roll of color film, stored at 1024×1536 resolution using JPEG compression. The product is aimed at consumers.

CD Interactive
The Philips "Green Book" specifies the standard for interactive multimedia Compact Discs designed for CD-i players. This Compact Disc format is unusual because it hides the initial tracks, which contain the software and data files used by CD-i players, by omitting the tracks from the disc's Table of Contents. This causes audio CD players to skip the CD-i data tracks. This is different from the CD-i Ready format, which puts CD-i software and data into the pregap of Track 1.
Enhanced CD
Enhanced CD, also known as CD Extra and CD Plus, is a certification mark of the Recording Industry Association of America for various technologies that combine audio and computer data for use in both compact disc and CD-ROM players. The primary data formats for Enhanced CD discs are mixed mode (Yellow Book/Red Book), CD-i, hidden track, and multisession (Blue Book).

Recordable CD
Recordable compact discs, CD-Rs, are injection moulded with a "blank" data spiral. A photosensitive dye is then applied, after which the discs are metalized and lacquer coated. The write laser of the CD recorder changes the color of the dye to allow the read laser of a standard CD player to see the data as it would on an injection-moulded compact disc. The resulting discs can be read by most (but not all) CD-ROM drives and played in most (but not all) audio CD players. CD-R recordings are designed to be permanent. Over time the dye's physical characteristics may change, however, causing read errors and data loss, until the reading device cannot recover with error-correction methods. The design life is from 20 to 100 years, depending on the quality of the discs, the quality of the writing drive, and storage conditions. However, testing has demonstrated such degradation of some discs in as little as 18 months under normal storage conditions. This process is known as CD rot. CD-Rs follow the Orange Book standard.

Recordable Audio CD
The Recordable Audio CD is designed to be used in a consumer audio CD recorder, which won't (without modification) accept standard CD-R discs. These consumer audio CD recorders use SCMS (Serial Copy Management System), an early form of digital rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The Recordable Audio CD is typically somewhat more expensive than CD-R due to (a) lower volumes and (b) a 3% AHRA royalty used to compensate the music industry for the making of a copy.
ReWritable CD
CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write laser in this case is used to heat and alter the properties (amorphous vs. crystalline) of the alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players cannot
read CD-RW discs, although most later CD audio players and stand-alone DVD players can. CD-RWs follow the Orange Book standard.

Check Your Progress 1
List the different CD-ROM formats.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
8.4 DVD
DVD (also known as "Digital Versatile Disc" or "Digital Video Disc") is a popular optical disc storage media format. Its main uses are video and data storage. Most DVDs are of the same dimensions as compact discs (CDs) but store more than six times the data. Variations of the term DVD often describe the way data is stored on the discs: DVD-ROM holds data which can only be read and not written, DVD-R can be written once and then functions as a DVD-ROM, and DVD-RAM or DVD-RW holds data that can be rewritten multiple times. DVD-Video and DVD-Audio discs refer to properly formatted and structured video and audio content, respectively. Other types of DVD discs, including those with video content, may be referred to as DVD-Data discs. The term "DVD" is commonly misused to refer to high-density optical disc formats in general, such as Blu-ray and HD DVD. "DVD" was originally used as an initialism for the unofficial term "digital video disc". It was reported in 1995, at the time of the specification's finalization, that the letters officially stood for "digital versatile disc" (due to non-video applications); however, the text of the press release announcing the finalization refers to the technology only as "DVD", making no mention of what (if anything) the letters stood for. Usage in the present day varies, with "DVD", "Digital Video Disc", and "Digital Versatile Disc" all being common.
8.4.1 DVD disc capacity

Physical size          Single layer (GB / GiB)    Dual/Double layer (GB / GiB)
12 cm, single sided    4.7 / 4.37                 8.54 / 7.95
12 cm, double sided    9.4 / 8.74                 17.08 / 15.90
8 cm, single sided     1.4 / 1.30                 2.6 / 2.42
8 cm, double sided     2.8 / 2.61                 5.2 / 4.84

The 12 cm type is a standard DVD, and the 8 cm variety is known as a mini-DVD. These are the same sizes as a standard CD and a mini-CD.
Note: GB here means gigabyte, equal to 10^9 (or 1,000,000,000) bytes. Many programs will display gibibytes (GiB), equal to 2^30 (or 1,073,741,824) bytes.
Example: A disc with 8.5 GB capacity is equivalent to (8.5 × 1,000,000,000) / 1,073,741,824 ≈ 7.92 GiB.
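The GB-to-GiB conversion in the note above is easy to reproduce:

```python
def gb_to_gib(gb):
    """Convert decimal gigabytes (10**9 bytes) to gibibytes (2**30 bytes)."""
    return gb * 1_000_000_000 / 2**30

print(round(gb_to_gib(8.5), 2))  # → 7.92, matching the example above
print(round(gb_to_gib(4.7), 2))  # → 4.38 (shown as 4.37 in tables that truncate)
```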
Capacity Note: There is a difference in capacity (storage space) between + and - DL DVD formats. For example, the 12 cm single-sided discs have the following capacities:

Disc Type    Sectors      Bytes            GB     GiB
DVD-R SL     2,298,496    4,707,319,808    4.7    4.384
DVD+R SL     2,295,104    4,700,372,992    4.7    4.378
DVD-R DL     4,171,712    8,543,666,176    8.5    7.957
DVD+R DL     4,173,824    8,547,991,552    8.5    7.961
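Each DVD sector carries 2048 bytes of user data, so the byte counts in the table follow directly from the sector counts:

```python
SECTOR_SIZE = 2048  # bytes of user data per DVD sector

sectors = {
    "DVD-R SL": 2_298_496,
    "DVD+R SL": 2_295_104,
    "DVD-R DL": 4_171_712,
    "DVD+R DL": 4_173_824,
}
for disc, count in sectors.items():
    print(f"{disc}: {count * SECTOR_SIZE:,} bytes")
# DVD-R SL: 4,707,319,808 bytes, and so on, matching the table above.
```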
Technology
DVD uses 650 nm wavelength laser diode light, as opposed to 780 nm for CD. This permits a smaller spot on the media surface: 1.32 µm for DVD versus 2.11 µm for CD. Writing speeds for DVD were 1x, that is, 1350 kB/s (1318 KiB/s), in the first drive and media models. More recent models at 18x or 20x have 18 or 20 times that speed. Note that for CD drives, 1x means 153.6 kB/s (150 KiB/s), about 9 times slower.

8.4.2 DVD recordable and rewritable
HP initially developed recordable DVD media from the need to store data for back-up and transport. DVD recordables are now also used for consumer audio and video recording. Three formats were developed: DVD-R/RW (minus/dash), DVD+R/RW (plus), and DVD-RAM.

Dual layer recording
Dual layer recording allows DVD-R and DVD+R discs to store significantly more data, up to 8.5 gigabytes per side per disc, compared with 4.7 gigabytes for single-layer discs. DVD-R DL was developed for the DVD Forum by Pioneer Corporation; DVD+R DL was developed for the DVD+RW Alliance by Philips and Mitsubishi Kagaku Media (MKM). A dual layer disc differs from its usual DVD counterpart by employing a second physical layer within the disc itself. A drive with dual layer capability accesses the second layer by shining the laser through the first, semi-transparent layer. The layer change mechanism in some DVD players can show a noticeable pause, as long as two seconds by some accounts. This caused more than a few viewers to worry that their dual layer discs were damaged or defective, with the end result that studios began listing a standard message explaining the dual layer pausing effect on all dual layer disc packaging. DVD recordable discs supporting this technology are backward compatible with some existing DVD players and DVD-ROM drives. Many current DVD recorders support dual-layer technology, and the price is now comparable to that of single-layer drives, though the blank media remain more expensive.
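As a quick check of the "1x" base rates quoted earlier in this section (a sketch only, using the values from the text):

```python
CD_1X_KBPS = 153.6    # kB/s (150 KiB/s) for CD drives
DVD_1X_KBPS = 1350.0  # kB/s (1318 KiB/s) for DVD drives

print(DVD_1X_KBPS / CD_1X_KBPS)  # roughly 9: a DVD 1x drive is about 9x a CD 1x drive
print(20 * DVD_1X_KBPS)          # a 20x DVD drive moves 27,000 kB/s
```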
DVD-Video DVD-Video is a standard for storing video content on DVD media. Though many resolutions and formats are supported, most consumer DVD-Video discs use either 4:3 or anamorphic 16:9 aspect ratio MPEG-2 video, stored at a resolution
of 720×480 (NTSC) or 720×576 (PAL) at 24, 25, or 30 FPS. Audio is commonly stored using the Dolby Digital (AC-3) or Digital Theater System (DTS) formats, ranging from 16-bit/48 kHz to 24-bit/96 kHz, with monaural to 7.1-channel "Surround Sound" presentation, and/or MPEG-1 Layer 2. Although the specifications for video and audio requirements vary by global region and television system, many DVD players support all possible formats. DVD-Video also supports features like menus, selectable subtitles, multiple camera angles, and multiple audio tracks.

DVD-Audio
DVD-Audio is a format for delivering high-fidelity audio content on a DVD. It offers many channel configuration options (from mono to 7.1 surround sound) at various sampling frequencies (up to 24-bit/192 kHz versus CDDA's 16-bit/44.1 kHz). Compared with the CD format, the much higher capacity of the DVD format enables the inclusion of considerably more music (with respect to total running time and quantity of songs) and/or far higher audio quality (reflected by higher sampling rates and bit depths, and/or additional channels for spatial sound reproduction). Despite DVD-Audio's superior technical specifications, there is debate as to whether the resulting audio enhancements are distinguishable in typical listening environments. DVD-Audio currently forms a niche market, probably due to the sort of format war with the rival standard SACD that DVD-Video avoided.

Check Your Progress 2
Specify the different storage capacities available in a DVD.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.

8.4.3 Security in DVD
DVD-Audio discs employ a robust copy prevention mechanism, called Content Protection for Prerecorded Media (CPPM), developed by the 4C group (IBM, Intel, Matsushita, and Toshiba). To date, CPPM has not been "broken" in the sense that DVD-Video's CSS has been broken, but ways to circumvent it have been developed.
By modifying commercial DVD(-Audio) playback software to write the decrypted and decoded audio streams to the hard disk, users can, essentially, extract content from DVD-Audio discs much in the same way they can from DVD-Video discs.
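The audio formats compared in this lesson differ mainly in raw PCM data rate; a small illustrative calculation (the function name is ours, not from any standard):

```python
def pcm_bits_per_second(sample_rate_hz, bits_per_sample, channels):
    """Raw, uncompressed PCM data rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

cd_audio = pcm_bits_per_second(44_100, 16, 2)    # Red Book CD audio (CDDA)
dvd_audio = pcm_bits_per_second(192_000, 24, 2)  # DVD-Audio, top stereo spec

print(cd_audio)   # → 1411200 bits/s (about 1.4 Mbps)
print(dvd_audio)  # → 9216000 bits/s (about 9.2 Mbps)
```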
8.4.4 Competitors and successors to DVD
There are several possible successors to DVD being developed by different consortia: Sony/Panasonic's Blu-ray Disc (BD), Toshiba's HD DVD, and 3D optical data storage are being actively developed. Blu-ray Disc and HD DVD competed to become the next generation of DVD.
CD-ROM and DVD are optical storage devices. A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate plastic and weighs approximately 16 grams. DVD (also known as "Digital Versatile Disc" or "Digital Video Disc") is a popular optical disc storage media format. The different formats of CD-ROM include:
- ReWritable CD
- Recordable Audio CD
- Recordable CD
- CD Interactive
- Enhanced CD
- Picture CD
- Photo CD
- Super Video CD
- Video CD
- CD-MIDI
- CD + Graphics
- CD + Extended Graphics
- CD-Text
2. The storage capacities of DVD are:
- DVD-R SL - 4,707,319,808 bytes - 4.7 GB
- DVD+R SL - 4,700,372,992 bytes - 4.7 GB
- DVD-R DL - 8,543,666,176 bytes - 8.5 GB
8.8 References
1. Multimedia: Making It Work by Tay Vaughan
2. Multimedia in Practice: Technology and Applications by Jeffcoate
3. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt
9.1 Introduction
An input device is a hardware mechanism that transforms information in the external world for consumption by a computer. An output device is hardware used to communicate the results of data processing, carried out by the CPU, to the user.
Input devices can be classified according to:
- the modality of input (e.g., mechanical motion, audio, visual, etc.)
- whether the input is discrete (e.g., keypresses) or continuous (e.g., a mouse's position; though digitized into a discrete quantity, it is high-resolution enough to be thought of as continuous)
- the number of degrees of freedom involved (e.g., many mice allow 2D positional input, but some devices allow 3D input, such as the Logitech Magellan Space Mouse)

Pointing devices, which are input devices used to specify a position in space, can further be classified according to:
- whether the input is direct or indirect. With direct input, the input space coincides with the display space, i.e., pointing is done in the space where visual feedback or the cursor appears. Touchscreens and light pens involve direct input. Examples involving indirect input include the mouse and trackball.
- whether the positional information is absolute (e.g., on a touch screen) or relative (e.g., with a mouse that can be lifted and repositioned)

Note that direct input is almost necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions; they are often run in an absolute input mode, but they may also be set up to simulate a relative input mode, where the stylus or puck can be lifted and repositioned.
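The classification above can be modeled as data; the class and field names below are illustrative only, not from any standard API:

```python
from dataclasses import dataclass

@dataclass
class PointingDevice:
    name: str
    continuous: bool   # False for discrete input such as keypresses
    dof: int           # degrees of freedom
    direct: bool       # input space coincides with display space
    absolute: bool     # absolute vs. relative positioning

mouse = PointingDevice("mouse", continuous=True, dof=2,
                       direct=False, absolute=False)
touchscreen = PointingDevice("touchscreen", continuous=True, dof=2,
                             direct=True, absolute=True)

# Direct input is almost necessarily absolute, as the text notes:
for dev in (mouse, touchscreen):
    assert not dev.direct or dev.absolute
```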
9.2.2 Keyboards
A keyboard is the most common method of interaction with a computer. Keyboards provide various tactile responses (from firm to mushy) and have various layouts depending upon your computer system and keyboard model. Keyboards are typically rated for at least 50 million cycles (the number of times a key can be pressed before it might suffer breakdown). The most common keyboard for PCs is the 101 style (which provides 101 keys), although many styles are available with more or fewer special keys, LEDs, and other features, such as a plastic membrane cover for industrial or food-service applications or flexible ergonomic styles. Macintosh keyboards connect to the Apple Desktop Bus (ADB), which manages all forms of user input, from digitizing tablets to mice. Examples of types of keyboards include:
- Computer keyboard
- Keyer
While the most common pointing device by far is the mouse, many more devices have been developed. The mouse is commonly used as a metaphor for devices that move the cursor. A mouse is the standard tool for interacting with a graphical user interface (GUI). All Macintosh computers require a mouse; on PCs, mice are not required but recommended. Even though the Windows environment accepts keyboard entry in lieu of mouse point-and-click actions, your multimedia project should typically be designed with the mouse or touchscreen in mind. The buttons on the mouse provide additional user input, such as pointing and double-clicking to open a document, or the click-and-drag operation, in which the mouse button is pressed and held down to drag (move) an object, to move to and select an item on a pull-down menu, or to access context-sensitive help. The Apple mouse has one button; PC mice may have as many as three. Examples of common pointing devices include:
- mouse
- trackball
- touchpad
- Spaceball - 6-degrees-of-freedom controller
- touchscreen
- graphics tablet (or digitizing tablet) that uses a stylus
- light pen
- light gun
- eye tracking devices
- steering wheel - can be thought of as a 1D pointing device
- yoke (aircraft)
- jog dial - another 1D pointing device
- isotonic joysticks - where the user can freely change the position of the stick, with more or less constant force
  o joystick
  o analog stick
- isometric joysticks - where the user controls the stick by varying the amount of force they push with, and the position of the stick remains more or less constant
  o pointing stick
- discrete pointing devices
  o directional pad - a very simple keyboard
  o dance pad - used to point at gross locations in space with feet
A game controller could be thought of as a composite device; many gaming devices have controllers like this:
- Game controller
- Gamepad (or joypad)
- Paddle (game controller)
- Wii Remote
Be aware that scanned images, particularly those at high resolution and in color, demand an extremely large amount of storage space on the hard disk, no matter what instrument is used to do the scanning. Also remember that the final monitor display resolution for your multimedia project will probably be just 72 or 95 dpi; leave the very expensive ultra-high-resolution scanners to the desktop publishers. Most inexpensive flatbed scanners offer at least 300 dpi resolution, and most scanners allow you to set the scanning resolution. Scanners help make clear electronic images of existing artwork such as photos, ads, pen drawings, and cartoons, and can save many hours when you are incorporating proprietary art into an application. Scanners also give a starting point for creative diversions. The devices used for capturing images and video are:
- Webcam
- Image scanner
- Fingerprint scanner
- Barcode reader
- 3D scanner
- medical imaging sensor technology
  o Computed tomography
  o Magnetic resonance imaging
  o Positron emission tomography
  o Medical ultrasonography
9.2.7 Touchscreens
Touchscreens are monitors that usually have a textured coating across the glass face. This coating is sensitive to pressure and registers the location of the user's finger when it touches the screen. The Touch Mate System, which has no coating, actually measures the pitch, roll, and yaw rotation of the monitor when pressed by a finger, and determines how much force was exerted and the location where the force was applied. Other touchscreens use invisible beams of infrared light that crisscross the front of the monitor to calculate where a finger was pressed. Pressing twice on the screen in quick succession simulates a mouse double-click; dragging the finger, without lifting it, to another location simulates a mouse click-and-drag. A keyboard is sometimes simulated using an onscreen representation so users can input names, numbers, and other text by pressing keys.
Touchscreens are not recommended for day-to-day computer work, but are excellent for multimedia applications in a kiosk, at a trade show, or in a museum delivery system: anything involving public input and simple tasks. When your project is designed to use a touchscreen, the monitor is the only input device required, so you can secure all other system hardware behind locked doors to prevent theft or tampering.

Check Your Progress 1
List a few input devices.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
The monitor needed for development of multimedia projects depends on the type of multimedia application created, as well as what computer is being used. A wide variety of monitors is available for both Macintoshes and PCs. High-end, large-screen graphics monitors are available for both, and they are expensive.
Serious multimedia developers will often attach more than one monitor to their computers, using add-on graphics boards. This is because many authoring systems allow you to work with several open windows at a time, so you can dedicate one monitor to viewing the work you are creating or designing, and perform various editing tasks in windows on other monitors that do not block the view of your work. Editing windows can otherwise overlap the work view when developing with Macromedia's authoring environment, Director, on one monitor. Developing in Director is best with at least two monitors: one to view the work, the other to view the Score. A third monitor is often added by Director developers to display the Cast.

9.3.4 Video Device
No other contemporary message medium has the visual impact of video. With a video digitizing board installed in a computer, you can display a television picture on your monitor. Some boards include a frame-grabber feature for capturing the image and turning it into a color bitmap, which can be saved as a PICT or TIFF file and then used as part of a graphic or a background in your project. Display of video on any computer platform requires manipulation of an enormous amount of data. When used in conjunction with videodisc players, which give precise control over the images being viewed, video cards let you place an image into a window on the computer monitor; a second television screen dedicated to video is not required. And video cards typically come with excellent special-effects software. There are many video cards available today. Most of these support various video-in-a-window sizes, identification of source video, setup of play sequences or segments, special effects, frame grabbing, digital movie making; and some have built-in television tuners so you can watch your favorite programs in a window while working on other things. In Windows, video overlay boards are controlled through the Media Control Interface.
On the Macintosh, they are often controlled by external commands and functions (XCMDs and XFCNs) linked to your authoring software. Good video greatly enhances your project; poor video will ruin it. Whether you deliver your video from tape using VISCA controls, from videodisc, or as a QuickTime or AVI movie, it is important that your source material be of high quality.

9.3.5 Projectors
When it is necessary to show material to more viewers than can huddle around a computer monitor, it will be necessary to project it onto a large screen or even a white-painted wall. Cathode-ray tube (CRT) projectors, liquid crystal display (LCD) panels attached to an overhead projector, stand-alone LCD projectors, and light-valve projectors are available to splash the work onto big-screen surfaces. CRT projectors have been around for quite a while: they are the original big-screen televisions. They use three separate projection tubes and lenses (red, green, and blue), and the three color channels of light must converge accurately on the screen. Setup, focusing, and aligning are important to getting a clear and crisp picture. CRT projectors
are compatible with the output of most computers as well as televisions. LCD panels are portable devices that fit in a briefcase. The panel is placed on the glass surface of a standard overhead projector available in most schools, conference
rooms, and meeting halls. While the overhead projector does the projection work, the panel is connected to the computer and provides the image, in thousands of colors and, with active-matrix technology, at speeds that allow full-motion video and animation. Because LCD panels are small, they are popular for on-the-road presentations, often connected to a laptop computer and using a locally available overhead projector. More complete LCD projection panels contain a projection lamp and lenses and do not require a separate overhead projector. They typically produce an image brighter and sharper than the simple panel model, but they are somewhat larger and cannot travel in a briefcase. Light-valve projectors compete with high-end CRT projectors and use a liquid crystal technology in which a low-intensity color image modulates a high-intensity light beam. These units are expensive, but the image from a light-valve projector is very bright and color saturated, and can be projected onto screens as wide as 10 meters.

9.3.6 Printers
With the advent of reasonably priced color printers, hard-copy output has entered the multimedia scene. From storyboards to presentations to production of collateral marketing material, color printers have become an important part of the multimedia development environment. Color helps clarify concepts, improve understanding and retention of information, and organize complex data. As multimedia designers already know, intelligent use of color is critical to the success of a project. Tektronix offers both solid ink and laser options; its Phaser 560 will print more than 10,000 pages at a rate of 5 color pages or 14 monochrome pages per minute before requiring new toner. Epson provides lower-cost and lower-performance solutions for home and small business users; Hewlett-Packard's Color LaserJet line competes with both.
Most printer manufacturers offer a color model: just as all computers once used monochrome monitors but are now color, all printers will become color printers.

Check Your Progress 2
List a few output devices.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
connected on a local area network (LAN). The client's computers, however, may be thousands of miles distant, requiring other methods for good communication. Communication among workgroup members and with the client is essential to the efficient and accurate completion of a project. And when speedy data transfer is needed immediately, a modem or network is required. If the client and the service provider are both connected to the Internet, a combination of communication by e-mail and by FTP (File Transfer Protocol) may be the most cost-effective and efficient solution for both creative development and project management. In the workplace, it is necessary to use quality equipment and software for the communication setup. The cost, in both time and money, of stable and fast networking will be returned to the content developer.

9.4.1 Modems
Modems can be connected to the computer externally at a port or internally as a separate board. Internal modems often include fax capability. Be sure your modem is Hayes-compatible. The Hayes AT standard command set (named for the ATTENTION command that precedes all other commands) allows it to work with most software communications packages. Modem speed, measured in baud, is the most important consideration. Because the multimedia files that contain the graphics, audio resources, video samples, and progressive versions of your project are usually large, you need to move as much data as possible in as short a time as possible. Today's standards dictate at least a V.34 28,800 bps modem. Transmitting at only 2400 bps, a 350 KB file may take as long as 45 minutes to send, but at 28.8 kbps, you can be done in a couple of minutes. Most modems follow the CCITT V.32 or V.42 standards that provide data compression algorithms when communicating with another similarly equipped modem. Compression saves significant transmission time and money, especially over long distances.
Be sure the modem uses a standard compression system (like V.32), not a proprietary one. According to the laws of physics, copper telephone lines and the switching equipment at the phone companies' central offices can handle modulated analog signals up to about 28,000 bps on clean lines. Modem manufacturers that advertise data transmission speeds higher than that (56 Kbps) are counting on their hardware-based compression algorithms to crunch the data before sending it, decompressing it upon arrival at the receiving end. If you have already compressed the data into a .SIT, .SEA, .ARC, or .ZIP file, you may not reap any benefit from the higher advertised speeds
because it is difficult to compress an already-compressed file. New high-speed transmission technologies over telephone lines are on the horizon.

9.4.2 ISDN
For higher transmission speeds, you will need to use Integrated Services Digital Network (ISDN), Switched-56, T1, T3, DSL, ATM, or another of the telephone companies' digital switched network services. ISDN lines are popular because of their fast 128 Kbps data transfer rate, four to five times faster than the more common 28.8 Kbps analog modem. ISDN lines (and the required ISDN hardware, often misnamed "ISDN modems" even though no modulation/demodulation of the analog signal occurs) are important for Internet access,
networking, and audio and video conferencing. They are more expensive than conventional analog or POTS (Plain Old Telephone Service) lines, so analyze your costs and benefits carefully before upgrading to ISDN. Newer and faster Digital Subscriber Line (DSL) technology using copper lines and promoted by the telephone companies may overtake ISDN.

9.4.3 Cable Modems
In November 1995, a consortium of cable television industry leaders announced agreement with key equipment manufacturers to specify some of the technical ways cable networks and data equipment talk with one another. This was a call for interoperability standards. 3COM, AT&T, COM21, General Instrument, Hewlett-Packard, Hughes, Hybrid, IBM, Intel, LANCity, MicroUnity, Motorola, Nortel, Panasonic, Scientific Atlanta, Terayon, Toshiba, and Zenith currently supply cable modem products. While the cable television networks cross 97 percent of property lines in North America, each local cable operator may use different equipment, wires, and software, and cable modems still remain somewhat experimental. Cable modems operate at speeds 100 to 1,000 times as fast as a telephone modem, receiving data at up to 10 Mbps and sending data at speeds between 2 Mbps and 10 Mbps. They can provide not only high-bandwidth Internet access but also streaming audio and video for television viewing. Most will connect to computers with 10BaseT Ethernet connectors. Cable modems usually send and receive data asymmetrically: they receive more (faster) than they send (slower). In the downstream direction, from provider to user, the data are modulated and placed on a common 6 MHz television carrier, somewhere between 42 MHz and 750 MHz. The upstream channel, or reverse path, from the user back to the provider is more difficult to engineer because cable is a noisy environment, with interference from HAM radio, CB radio, home appliances, loose connectors, and poor home installation.
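Transfer times at the line speeds discussed in this section can be estimated with a naive sketch (it ignores protocol overhead, noise, retries, and compression, so real-world figures such as the 45 minutes quoted earlier can be considerably higher):

```python
def transfer_seconds(file_bytes, bits_per_second):
    # Classic async modems send roughly 10 bits per byte (8 data bits
    # plus start and stop bits); used here as a rough rule of thumb
    # for the faster links as well.
    return file_bytes * 10 / bits_per_second

FILE_SIZE = 350_000  # the 350 KB file from the modem discussion
for name, bps in [("2400 bps modem", 2_400),
                  ("28.8 kbps modem", 28_800),
                  ("128 kbps ISDN", 128_000),
                  ("10 Mbps cable modem", 10_000_000)]:
    print(f"{name}: {transfer_seconds(FILE_SIZE, bps):.1f} s")
```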
2. The following are examples of output devices: printer, speaker, plotter, monitor, projector.
9.8 References
1. Multimedia: Making It Work, by Tay Vaughan
2. Multimedia in Practice: Technology and Applications, by Jeffcoate
3. http://www.wacona.com/input/input.html
4. www.webopedia.com/TERM/I/input_device.html
5. http://en.wikipedia.org/wiki/Input_device
Lesson 10 Multimedia Workstation
Contents
10.0 Aims and Objectives
10.1 Introduction
10.2 Communication Architecture
10.3 Hybrid Systems
10.4 Digital System
10.5 Multimedia Workstation
10.6 Preference of Operating System for workstation
10.7 Let us sum up
10.8 Lesson-end activities
10.9 Model answers to Check your progress
10.10 References
10.1 Introduction
A multimedia workstation is a computer with facilities to handle multimedia objects such as text, audio, video, animation and images. A multimedia workstation was earlier identified as an MPC (Multimedia Personal Computer). In the current scenario, all computers come prebuilt with multimedia processing facilities, so it is no longer necessary to single out a computer as an MPC. A multimedia system comprises both hardware and software components, but the major driving force behind multimedia development is research and development in hardware capabilities. Besides the multimedia hardware capabilities of current personal computers (PCs) and workstations, computer networks, with their increasing throughput and speed, have started to offer services which support multimedia communication systems. In this area too, computer networking technology advances faster than the software.
other. However, the transmission of audio and video cannot be carried out with only the conventional communication infrastructure and network adapters. Until now, the solution has been that continuous and discrete media were handled in different environments, independently of each other; that is, fully different systems were built. On the one hand, the analog telephone system provides audio transmission services using its original dial devices, connected by copper wires to the telephone company's nearest end office. The end offices are connected to switching centers, called toll offices, and these centers are connected through high-bandwidth intertoll trunks to intermediate switching offices. This hierarchical structure allows for reliable audio communication. On the other hand, digital computer networks provide data transmission services at lower data rates using network adapters connected by copper wires to switches and routers. Even today, professional radio and television studios transmit audio and video streams in the form of analog signals, although most network components (e.g., switches) over which these signals are transmitted work internally in digital mode.
Integrated Transmission

The next possibility for integrating digital and analog components is to provide a common transmission network. This implies that external analog audio-video devices are connected to computers using A/D (D/A) converters outside of the computer, not only for control, but also for processing purposes. Continuous data are transmitted over shared data networks.
or digital.

Connection to Switches

Another possibility for connecting audio-video devices to a digital network is to connect them directly to the network switches.
Bus

Within current workstations, data are transmitted over the traditional asynchronous bus. This means that if audio-video devices are connected to a workstation, continuous data are processed in the workstation and transferred over this bus, which provides only low and unpredictable timing guarantees. In multimedia workstations, in addition to this bus, data will be transmitted over a second bus which can keep timing guarantees. In later technical implementations, a bus may be developed which transmits both kinds of data according to their requirements (this is known as a multi-bus system). The notion of a bus has to be divided into the system bus and the periphery bus. In their current versions, system busses such as ISA, EISA, Microchannel, Q-bus and VME-bus support only limited transfer of continuous data. The further development of periphery busses, such as SCSI, is aimed at supporting data transfer for continuous media.

Multimedia Devices

The main peripheral components are the necessary input and output multimedia devices. Most of these devices were developed for or by consumer electronics, resulting in the relatively low cost of the devices. Microphones, headphones, and passive and active speakers are examples. For the most part, active speakers and headphones are connected to the computer because it generally does not contain an amplifier. The camera for video input is also taken from consumer electronics. Hence, a video interface in a computer must accommodate the most commonly used video techniques/standards,
i.e., NTSC, PAL and SECAM with FBAS, RGB, YUV and YIQ modes. A monitor serves for video output. Besides Cathode Ray Tube (CRT) monitors (e.g., current workstation terminals), more and more terminals use the color-LCD technique (e.g., a projection TV monitor uses the LCD technique). Further, to display video, monitor characteristics such as color, high resolution, and a flat, large shape are important.

Primary Storage

Audio and video data are copied among different system components in a digital system. Examples of tasks where copying of data is necessary are the segmentation of LDUs or the appending of a header and trailer. The copying operation uses system-software-specific memory management designed for continuous media. This kind of memory management needs sufficient main memory (primary storage). Besides ROMs, PROMs, EPROMs and partially static memory elements, the low cost of these modules, together with steadily increasing storage capacities, benefits the multimedia world.

Secondary Storage

The main requirements put on secondary storage and the corresponding controller are a high storage density and a low access time, respectively.
On the one hand, to achieve a high storage density, a Constant Linear Velocity (CLV) technique was defined for the CD-DA (Compact Disc Digital Audio). CLV keeps the data density constant over the entire optical disk at the expense of a higher mean access time. On the other hand, to achieve time guarantees, i.e., a lower mean access time, a Constant Angular Velocity (CAV) technique can be used. Because the time requirement is more important, systems with CAV are more suitable for multimedia than systems with CLV.

Processor

In a multimedia workstation, the necessary work is distributed among different processors, although currently, and for the near future, this does not mean that all multimedia workstations must be multiprocessor systems. The processors are designed for different tasks. For example, a Digital Signal Processor (DSP) allows compression and decompression of audio in real time. Moreover, special-purpose processors can be employed for video. The following figure shows an example of a multiprocessor for multimedia workstations envisioned for the future.

Operating System

Another possible way to provide computation of discrete and continuous data in a multimedia workstation is to distinguish between processes for discrete data computation and processes for continuous data processing. These processes could run on separate processors. Given an adequate operating system, perhaps even one processor could be shared, according to requirements, between processes for discrete and continuous data.
Figure: example multiprocessor for a multimedia workstation (vector units and CPUs with cache, connected over a bus interface to DVI technology)
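The CLV/CAV trade-off above can be illustrated with a little arithmetic. Under CAV the disc spins at a fixed rate, so the linear velocity, and with it the raw data rate at constant recording density, grows with track radius; CLV instead varies the spindle speed to keep the rate constant. The radii, rotation rate and density in this Python sketch are illustrative values, not CD-DA specifications.

```python
import math

# Under Constant Angular Velocity (CAV) the disc spins at a fixed rate,
# so the linear velocity (and hence the raw data rate at constant
# recording density) grows with the track radius. CLV instead varies
# the spindle speed to keep the linear velocity constant.
# Radii and density below are illustrative, roughly CD-sized values.

RPM = 500                       # fixed rotation rate for CAV
DENSITY = 1_000_000             # bits per metre of track (illustrative)

def cav_data_rate(radius_m: float) -> float:
    """Bits per second read at a given radius under CAV."""
    linear_velocity = 2 * math.pi * radius_m * RPM / 60   # m/s
    return linear_velocity * DENSITY

inner = cav_data_rate(0.025)    # innermost track, 25 mm
outer = cav_data_rate(0.058)    # outermost track, 58 mm

# Under CAV the outer track delivers data about 2.3x faster than the
# inner one; CLV would equalize the two by slowing the spindle at the edge.
print(f"inner: {inner/1e6:.2f} Mbit/s, outer: {outer/1e6:.2f} Mbit/s")
```

The constant rotation rate is what gives CAV its predictable access time: no spindle speed change is needed when seeking, which is why CAV suits time-critical multimedia retrieval.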
Check Your Progress 1

List a few components required for a multimedia workstation.

Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
__________
10.6.3 Networking Macintosh and Windows Computers

In a multimedia development environment consisting of a mixture of Macintosh and Windows computers, you will want the machines to communicate with each other. It may also be necessary to share other resources among them, such as
printers. Local area networks (LANs) and wide area networks (WANs) can connect the members of a workgroup. In a LAN, workstations are usually located within a short distance of one another, for example on the same floor of a building. WANs are communication systems spanning great distances, typically set up and managed by large corporations and institutions for their own use, or to share with other users. LANs allow direct communication and sharing of peripheral resources such as file servers, printers, scanners, and network modems. They use a variety of proprietary technologies, most commonly Ethernet or Token Ring, to perform the connections. They can usually be set up with twisted-pair telephone wire, but be sure to use data-grade Level 5 (Cat-5) wire; it makes a real difference, even if it's a little more expensive. Bad wiring will give the user never-ending headaches of intermittent and often untraceable crashes and failures.
10.10 References
1. Multimedia: Concepts and Practice, by Stephen McGloughlin
2. Multimedia: Computing, Communications and Applications, by Ralf Steinmetz and Klara Nahrstedt
3. Digital Multimedia, by Nigel Chapman and Jenny Chapman
4. Video and Image Processing in Multimedia Systems, by Stephen W. Smoliar and HongJiang Zhang
11.9 Lesson-end activities
11.10 Model answers to Check your progress
11.11 References
11.1 Introduction
A document consists of a set of structural information that can be in different forms of media and can be generated or recorded during presentation. A document is aimed at the perception of a human, yet is accessible for computer processing.
11.2 Documents
A multimedia document is a document comprising information coded in at least one continuous (time-dependent) medium and in one discrete (time-independent) medium. Integration of the different media is achieved through a close relation between information units; this is also called synchronization. A multimedia document is closely related to its environment of tools, data abstractions, basic concepts and document architecture.

11.2.1 Document Architecture

Exchanging documents entails exchanging the document content as well as the document structure. This requires that both documents have the same document
architecture. The architectures currently standardized, or in the process of standardization, are the Standard Generalized Markup Language (SGML) and the Open Document Architecture (ODA). There are also proprietary document architectures, such as DEC's Document Content Architecture (DCA) and IBM's Mixed Object Document Content Architecture (MO:DCA). Information architectures use their own data abstractions and concepts. A document architecture describes the connections among the individual elements, represented as models (e.g., presentation model, manipulation model). The elements of the document architecture and their relations are shown in the following figure, which depicts a multimedia document architecture including relations between individual discrete media units and continuous media units. The manipulation model describes all the operations allowed for creation, change and deletion of multimedia information. The representation model defines: (1) the protocols for exchanging this information among different computers; and (2) the formats for storing the data. It includes the relations between the individual information elements which need to be considered during presentation. It is important to mention that an architecture need not include all of the described properties and models.

Figure: Document architecture and its elements
11.3 HYPERTEXT
Hypertext most often refers to text on a computer that will lead the user to other, related information on demand. Hypertext represents a relatively recent innovation to
user interfaces, which overcomes some of the limitations of written text. Rather than remaining static like traditional text, hypertext makes possible a dynamic organization of information through links and connections (called hyperlinks). Hypertext can be designed to perform various tasks; for instance, when a user "clicks" on it or "hovers" over it, a bubble with a word definition may appear, a web page on a related subject may load, a video clip may run, or an application may open. The prefix hyper- ("over" or "beyond") signifies the overcoming of the old linear constraints of written text.

Types and Uses of Hypertext

Hypertext documents can be either static (prepared and stored in advance) or dynamic (continually changing in response to user input). Static hypertext can be used to cross-reference collections of data in documents, software applications, or books on CD. A well-constructed system can also incorporate other user-interface conventions, such as menus and command lines. Hypertext can develop very complex and dynamic systems of linking and cross-referencing. The most famous implementation of hypertext is the World Wide Web.
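The non-linear organization described above can be modelled as a directed graph: nodes hold content, and the edges are the hyperlinks. The page names in this Python sketch are hypothetical.

```python
# A hypertext modelled as a directed graph: nodes hold content,
# edges are the hyperlinks. Page names are hypothetical; the point
# is non-linear navigation rather than reading in a fixed order.

links = {
    "intro":      ["history", "definition"],
    "history":    ["definition"],
    "definition": ["examples"],
    "examples":   ["intro"],        # links may form cycles
}

def reachable(start):
    """All pages a reader can reach from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(links.get(page, []))
    return seen

print(reachable("history"))   # every page is reachable despite no back-link
```

Traversal of such a graph is exactly what a reader does when browsing; a static hypertext fixes the `links` table in advance, while a dynamic one rebuilds it in response to user input.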
11.4 Hypermedia
Hypermedia is used as a logical extension of the term hypertext, one in which graphics, audio, video, plain text and hyperlinks intertwine to create a generally non-linear medium of information. This contrasts with the broader term multimedia, which may be used to describe non-interactive linear presentations as well as hypermedia. Hypermedia should not be confused with hypergraphics or super-writing, which are not related subjects. The World Wide Web is a classic example of hypermedia, whereas a non-interactive cinema presentation is an example of standard multimedia, due to the absence of hyperlinks. Most modern hypermedia is delivered via electronic pages from a variety of systems. Audio hypermedia is emerging with voice-command devices and voice browsing.
actual content. In the case of hypertext and hypermedia, a graphical structure is possible in a document, which may simplify the writing and reading processes.

Check Your Progress 1

Distinguish hypertext and hypermedia.

Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
Figure: document processing chain, from a problem description through document editing (MIFF, SGML) and script composition (Script/X, HyTime) to presentation, layout and printing (PostScript format); it contrasts document systems with interactive hyper-/multimedia systems, and processable form with final form
The following figure shows an example of such a link. The arrows point to relations between the information units (Logical Data Units, LDUs). In a text (top left in the figure), a reference to the landing properties of aircraft is given. These properties are demonstrated through a video sequence (bottom left in the figure). At another place in the text, sales of landing rights for the whole USA are shown, visualized in the form of a map using graphics (bottom right in the figure). Further information about the airlines with their landing rights can be made visible graphically through the selection of a particular city. Specific information about the number of different airplanes sold with landing rights in Washington is shown at the top right of the figure with a bar diagram. Internally, the diagram information is represented in table form. The left bar points to the plane, which can be shown with a video clip.
Figure: Hypertext data, an example of linking information of different media
Hypertext System: A hypertext system is characterized mainly by non-linear links among information units. Pointers connect the nodes, and the data of different nodes can be represented in one or several media types. In a pure text system, only text parts are connected. We understand hypertext as an information object which includes links to several media.

Multimedia System: A multimedia system contains information which is coded in at least one continuous and one discrete medium. For example, if only links to text data are present, then this is not a multimedia system; it is a hypertext. A video conference, with simultaneous transmission of text and graphics generated by a document-processing program, is a multimedia application, although it has no relation to hypertext or hypermedia.

Hypermedia System: As the above figure shows, a hypermedia system includes both the non-linear information links of hypertext systems and the continuous media of multimedia systems. For example, if a non-linear link connects text and video data, then this is at once a hypermedia, multimedia and hypertext system.
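The distinction among the three system types can be phrased as a small decision rule over two properties: which media kinds are present, and whether non-linear links exist. The Python sketch below is our own simplification of the text, not a standard classification API.

```python
# A small decision rule for the system types described above:
# classification depends on the media kinds present and on whether
# non-linear links exist. The category names follow the text; the
# function itself is a didactic simplification.

CONTINUOUS = {"audio", "video", "animation"}

def classify(media: set, has_links: bool) -> str:
    # "multimedia" requires at least one continuous AND one discrete medium
    multimedia = bool(media & CONTINUOUS) and bool(media - CONTINUOUS)
    if has_links and multimedia:
        return "hypermedia"          # non-linear links plus continuous media
    if has_links:
        return "hypertext"           # links over discrete media only
    if multimedia:
        return "multimedia"          # mixed media, linear presentation
    return "conventional document"

print(classify({"text", "video"}, has_links=True))    # hypermedia
print(classify({"text"}, has_links=True))             # hypertext
print(classify({"text", "audio"}, has_links=False))   # multimedia
```

Note how the video-conference example from the text falls out of the rule: continuous plus discrete media make it multimedia, but without non-linear links it is neither hypertext nor hypermedia.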
After the release of web browsers for both the PC and Macintosh environments, traffic on the World Wide Web quickly exploded, from only 500 known web servers in 1993 to over 10,000 in 1994. Thus, all earlier hypertext systems were overshadowed by the success of the web, even though it originally lacked many features of those earlier systems, such as an easy way to edit what you were reading, typed links, backlinks,
11.11 References
1. Multimedia: Concepts and Practice, by Stephen McGloughlin
2. Multimedia: Computing, Communications and Applications, by Ralf Steinmetz and Klara Nahrstedt
12.5 Let us sum up
12.6 Lesson-end activities
12.7 Model answers to Check your progress
12.8 References
12.1 Introduction
Exchanging documents entails exchanging the document content as well as the document structure. This requires that both documents have the same document architecture. The current standards in document architecture are:

1. Standard Generalized Markup Language (SGML)
2. Open Document Architecture (ODA)
groups agree on the meaning of the tags. SGML makes a framework available with which the user specifies the syntax description in an object-specific system. Here, classes and objects, hierarchies of classes and objects, inheritance and links to methods (processing instructions) can be used in the specification. SGML specifies the syntax, but not the semantics. For example:

<title>Multimedia-Systems</title>
<author>Felix Gatou</author>
<side>IBM</side>
<summary>This exceptional paper from Peter

This example shows an application of SGML in a text document. The following figure shows the processing of an SGML document, which is divided into two processes. The parser uses the tags occurring in the document in combination with the corresponding document type; only the formatter knows the meaning of the tags, and it transforms the document into a formatted document.

Figure: SGML document processing, from the information to the presentation

Specification of the document
structure is done with tags. Here, parts of the layout are linked together, based on the joint context between the originator of the document and the formatter process, as defined through SGML.
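As a rough illustration of the parser/formatter split described above, the following Python sketch recognizes the tags from the earlier example and maps each tag to a rendering rule. A real SGML parser is driven by the DTD; the tag-to-style table here is an assumption for demonstration, since SGML itself assigns no semantics to tags.

```python
from html.parser import HTMLParser

# Sketch of the parser/formatter split: the parser recognizes tags,
# and a trivial "formatter" table maps each tag to a rendering rule.
# The style assignments are assumptions for demonstration only,
# since SGML defines syntax, not semantics.

STYLE = {"title": "HEADING", "author": "ITALIC", "side": "MARGIN"}

class Extractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields, self._tag = {}, None
    def handle_starttag(self, tag, attrs):
        self._tag = tag
    def handle_data(self, data):
        if self._tag:
            # record the content together with its formatting rule
            self.fields[self._tag] = (STYLE.get(self._tag, "PLAIN"), data)
    def handle_endtag(self, tag):
        self._tag = None

doc = "<title>Multimedia-Systems</title><author>Felix Gatou</author>"
p = Extractor()
p.feed(doc)
print(p.fields)
```

Two different `STYLE` tables applied to the same parsed document would yield two different presentations, which is exactly the separation of structure from formatting that SGML is built around.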
12.2.1 SGML and Multimedia

Multimedia data are supported in the SGML standard only in the form of graphics: a graphical image in the form of a CGM (Computer Graphics Metafile) can be embedded in an SGML document. The standard does not refer to other media, as the following fragment shows:
<!ATTLIST video id ID #IMPLIED>
<!ATTLIST video synch synch #IMPLIED>
<!ELEMENT video (audio, movpic)>
<!ELEMENT audio (#NDATA)> -- not-text media
<!ELEMENT movpic (#NDATA)> -- not-text media
..
<!ELEMENT story (preamble, body, postamble)>
A link to concrete data can be specified through #NDATA; the data are mostly stored externally in a separate file. The above example shows the definition of a video element which consists of audio and motion pictures. Multimedia information units must be presented properly, and the synchronization between the components is very important here.
A content architecture describes, for each medium: (1) the specification of the elements; (2) the possible access functions; and (3) the data coding. The individual elements are the Logical Data Units (LDUs), which are determined for each medium. The access functions serve for the manipulation of individual elements. The coding of the data determines the mapping with respect to bits and bytes. ODA has content architectures for the media text, geometric graphics and raster graphics. Contents of the medium text are defined through the Character Content Architecture. The Geometric Graphics Content Architecture allows a content description of still images, taking individual graphical objects into account. Pixel-oriented still images are described through the Raster Graphics Content Architecture; this can be a bitmap as well as a facsimile.

12.3.2 Layout Structure and Logical Structure

The structure and presentation models describe, according to the information
architecture, the cooperation of information units. These kinds of meta-information distinguish the layout structure from the logical structure. The layout structure specifies mainly the representation of a document. It is related to a two-dimensional representation with respect to a screen or paper. The presentation model is a tree. Using frames, the position and size of individual layout elements are established; for example, the page size and type style are also determined. The logical structure includes the partitioning of the content. Here, paragraphs and individual headings are specified according to the tree structure. Lists with their entries are defined, for example, as:

Figure: ODA content layout and logical view
Paper = preamble body postamble
Body = chapter1 chapter2
Chapter1 = heading paragraph picture
Chapter2 = heading paragraph picture paragraph
The above example describes the logical structure of an article. Each article consists of a preamble, a body and a postamble. The body includes two chapters, both of which start with headings. Content is assigned to each element of this logical structure. The information architecture ODA includes the cooperative models shown in the following figure. The fundamental descriptive means of the structural and presentational models are linked to the individual nodes which build a document. The document is seen as a tree. Each node (also the document itself) is a constituent, or an object. It consists of a set of attributes, which represent the properties of the node. A node itself includes a concrete value, or it defines relations between other nodes. Here, the following relations and operators are allowed:

Sequence: all child nodes are ordered sequentially
Aggregate: no ordering among the child nodes
Choice: one of the child nodes is a successor
Optional: one or none (operator)
Repeat: one to any number of times (operator)
Optional Repeat: zero to any number of times (operator)

The simplified distinction is between editing, formatting (the Document Layout Process and Content Layout Process) and the actual presentation (the Imaging Process). Current WYSIWYG (What You See Is What You Get) editors combine these into one single step. It is important to mention that the processing assumes a linear reproduction; therefore, ODA is only partially suitable as a document architecture for a hypertext system. Hence, work is occurring on Hyper-ODA. A formatted document includes the specific layout structure, and possibly the generic layout structure. It can be printed or displayed directly, but it cannot be changed. A processable document consists of the specific logical structure, possibly the generic logical structure, and further the generic layout structure. This document cannot be printed or displayed directly, but change of content is possible. A formatted processable document is a mixed form.
It can be printed, displayed, and its content can be changed. For the communication of an ODA document, the representation model shown in the following figure is used. This can be either the Open Document Interchange Format
(ODIF) (based on ASN.1), or the Open Document Language (ODL) (based on SGML).
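The article structure discussed in Section 12.3.2 can be sketched as a tree of constituents, mirroring ODA's view of a document as a tree whose leaves carry the content portions. The class and node names below are our own shorthand, not ODA syntax.

```python
# A document as a tree of constituents, in the spirit of ODA's logical
# structure. Node and class names are our own shorthand: children are
# kept in order, matching the "Sequence" relation (all child nodes
# ordered sequentially).

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def leaves(self):
        """Leaf names in document order; content portions attach here."""
        if not self.children:
            return [self.name]
        return [leaf for c in self.children for leaf in c.leaves()]

chapter1 = Node("chapter1", [Node("heading"), Node("paragraph"), Node("picture")])
chapter2 = Node("chapter2", [Node("heading"), Node("paragraph"),
                             Node("picture"), Node("paragraph")])
body  = Node("body", [chapter1, chapter2])
paper = Node("paper", [Node("preamble"), body, Node("postamble")])

# Content is assigned to the leaves of this logical structure.
print(paper.leaves())
```

A layout process would walk the same tree and assign frames (positions and sizes) to the leaves, which is how the logical and layout structures cooperate in ODA.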
The manipulation model in ODA, shown in the above figure, makes use of Document Application Profiles (DAPs). These profiles define levels of an ODA implementation (Text Only; Text + Raster Graphics + Geometric Graphics; Advanced Level).

Check Your Progress 2

Distinguish additive and subtractive colors and write their areas of use.

Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.

Figure: ODA information architecture with structure, content, presentation and representation models
12.3.3 ODA and Multimedia

Multimedia requires, besides the spatial representational dimensions, time as a main part of a document. If ODA is to include continuous media, further extensions of the standard are necessary. Currently, multimedia is not part of the standard. All further paragraphs discuss only possible extensions, which formally may or may not be included in ODA in this form.

Contents

The content portions will change to timed content portions. Here, the duration does not have to be specified a priori; these are called Open Timed Content Portions. Consider, as an example, an animation which is generated during presentation time depending on external events: the information included during presentation time might be images taken from a camera. In the case of a Closed Timed Content Portion, the duration is fixed; a suitable example is a song.

Structure

Operations between objects must be extended with a time dimension, where the time relation is specified in the father node.

Content Architecture

Additional content architectures for audio and video must be defined, and the corresponding elements, LDUs, must be specified. For the access functions, a set of generally valid functions for the control of media streams needs to be specified; such functions are, for example, Start and Stop. Many functions are very often device-dependent. One of the most important aspects is a compatibility provision among different systems implementing ODA.

Logical Structure

Extensions of the logical structure for multimedia also need to be considered. For example, a film can include a logical structure. It could be a tree with the following components:

1. Prelude
   - Introductory movie segment
   - Participating actors in the second movie segment
2. Scene 1
3. Scene 2
4.
5. Postlude

Such a structure would often be desirable for the user, as it would allow one to deterministically skip some areas and to show or play other areas.
Layout Structure

The layout structure needs extensions for multimedia. The time relation between a motion picture and audio must be included. Further, questions such as "When will something be played?", "From which point?" and "With which attributes and dependencies?" must be answered. The spatial relation can specify, for example, relative and absolute positions of the audio object. Additionally, the volume and all other attributes and dependencies should be determined.
12.4 MHEG
The committee ISO/IEC JTC1/SC29 (Coding of Audio, Picture, Multimedia and Hypermedia Information) works on the standardization of the exchange formats for multimedia systems. The actual standards are developed at the international level in three working groups cooperating with research and industry. The following figure shows that three standards deal with the coding and compression of individual media. The results of two of the working groups, the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG), are of special importance in the area of multimedia systems. In a multimedia presentation, the contents, in the form of individual information objects, are described with the help of the above-named standards. The structure (e.g., processing in time) is then specified through temporal and spatial relations between the information objects. The standardization of this structure description is the subject of working group WG12, which is known as the Multimedia and Hypermedia Information Coding Expert Group (MHEG).

Figure: ISO/IEC JTC1/SC29, Coding of Audio, Picture, Multimedia and Hypermedia Information, with its working groups:
WG 1: Coding of Still Pictures (JBIG/JPEG)
WG 11: Coding of Moving Pictures and Associated Audio (MPEG)
WG 12: Coding of Multimedia and Hypermedia Information (MHEG)
The name of the developed standard is officially Information Technology - Coding of Multimedia and Hypermedia Information (MHEG). The final MHEG standard will be described in three documents. The first part discusses the concepts as well as the exchange format. The second part describes an alternative syntax of the exchange format, semantically isomorphic to the first part. The third part presents a reference architecture for linkage to script languages. The main concepts are covered in the first document, and the last two documents are still in progress; therefore, we will focus on the first document with the basic concepts. Further discussion of MHEG is based mainly on the committee draft version, because: (1) all related experience has been gained on this basis; (2) the basic concepts of the final standard and this committee draft remain the same; and (3) the finalization of this standard is still in progress.

12.4.1 Example of an Interactive Multimedia Presentation

Before a detailed description of the MHEG objects is given, we will briefly examine the individual elements of a presentation using a small scenario. The following figure presents a time diagram of an interactive multimedia presentation. The presentation starts with some music. As soon as the voice of a news-speaker is heard in the audio sequence, a graphic appears on the screen for a couple of seconds. After the graphic disappears, the viewer reads a text. After the text presentation ends, a Stop button appears on the screen; with this button the user can abort the audio sequence. Then, using a displayed input field, the user enters the title of a desired video sequence. The video data are displayed immediately after this modification.

Figure: time diagram of an interactive presentation (audio, image, text and video streams with selection and modification events)
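The temporal and conditional links in this scenario can be sketched as an event table: when an object reaches a given state, linked actions fire on other objects. Object and event names follow the scenario, but the mechanism is a simplification of ours, not MHEG syntax.

```python
# The scenario's temporal and conditional dependencies as an event
# table: when one object reaches a state, actions fire on others.
# Object/event names follow the scenario; the mechanism is a
# didactic simplification, not MHEG syntax.

links = {
    ("audio", "speech-starts"): [("graphic", "start")],
    ("graphic", "finished"):    [("text", "start")],
    ("text", "finished"):       [("stop-button", "start")],
}

log = []

def fire(obj, state):
    """Record a state change and trigger the actions linked to it."""
    log.append((obj, state))
    for target, action in links.get((obj, state), []):
        fire(target, action)   # e.g. removing the graphic shows the text

fire("audio", "speech-starts")   # the graphic starts with the speaker's voice
print(log)
```

Each entry in `links` plays the role of an MHEG link object: a condition on a source object's state, paired with actions on target objects.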
Content

A presentation consists of a sequence of information representations. For the representation of this information, media with very different properties are available. With a view to later reuse, it is useful to capture each information LDU as an individual object. The contents in our example are: the video sequence, the audio sequence, the
graphics and the text.

Behavior

The notion of behavior covers all information which specifies the representation of the contents as well as defining the run of the presentation. The first part is controlled by actions such as start, set volume, set position, etc. The second part is given by the definition of temporal, spatial and conditional links between individual elements. If the state of a content's presentation changes, this may trigger further commands on other objects (e.g., the deletion of the graphic causes the display of the text). The behavior of a presentation can also be determined by calls to external programs or functions (scripts).

User Interaction

In the discussed scenario, the running presentation could be aborted by a corresponding user interaction. There are two kinds of user interaction. The first is simple selection, which controls the run of the presentation through a pre-specified choice (e.g., pushing the Stop button). The second is the more complex modification, which gives the user the possibility to enter data during the run of the presentation (e.g., editing a data input field). By merging together several elements as discussed above, a presentation which progresses in time can be achieved. To be able to exchange this presentation between the involved systems, a composite element is necessary. This element is comparable to a container: it links all the objects together into a unit. With respect to hypertext/hypermedia documents, such containers can be ordered into a complex structure if they are linked together through so-called hypertext pointers.

12.4.2 Derivation of a Class Hierarchy

The following figure summarizes the individual elements in the MHEG class hierarchy in the form of a tree. Instances can be created from all leaves (roman printed classes). All internal nodes, including the root (italic printed classes), are abstract classes, i.e., no instances can be generated from them.
The leaves inherit some attributes from the root of the tree as an abstract base class. The internal nodes do not include any further functions; their task is to unify individual classes into meaningful groups. The action, link and script classes are grouped under the behavior class, which defines the behavior in a presentation. The interaction class covers user interaction, which is in turn modeled through the selection and modification classes. All these classes, together with the content and
composite classes, specify the individual components in the presentation and are grouped under the component class. Some properties of the particular MHEG engine can be queried through the descriptor class. The macro class serves to simplify the access to, and reuse of, objects. Both classes play a minor role; therefore, they will not be discussed further. The development of the MHEG standard uses the techniques of object-oriented design. Although a class hierarchy is considered a kernel of this technique, a closer look shows that the MHEG class hierarchy does not have the meaning it is often assigned.
MH-Object-Class
The abstract MH-Object-Class provides the two data structures MHEG Identifier and Descriptor. MHEG Identifier consists of the attributes Application Identifier and Object Number
and serves to address MHEG objects. The first attribute identifies a specific application. The Object Number is a number which is defined only within the application. The data structure Descriptor provides the possibility to characterize each MHEG object more precisely through a number of optional attributes. This can become meaningful, for example, if a presentation is decomposed into individual objects and the individual MHEG objects are stored in a database: any author, supported by a proper search function, can then reuse existing MHEG objects.
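The two-level addressing just described, an application identifier plus an object number that is unique only within that application, can be sketched as follows. This is an illustration in Python, not MHEG notation; the field and class names are chosen for readability and do not come from the standard.

```python
from dataclasses import dataclass

# Sketch of MHEG-style object addressing: an application identifier plus
# an object number that is unique only within that application.
@dataclass(frozen=True)
class MHEGIdentifier:
    application_id: str   # identifies a specific application
    object_number: int    # defined only within that application

# Sketch of the optional Descriptor attributes used for search and reuse.
@dataclass
class Descriptor:
    title: str = ""
    keywords: tuple = ()

clip = MHEGIdentifier("lesson-12-demo", 7)
same = MHEGIdentifier("lesson-12-demo", 7)
other = MHEGIdentifier("other-app", 7)

# Two identifiers are equal only when both attributes match, so object
# number 7 in another application names a different object.
```

A database of stored MHEG objects could then be keyed by such identifiers, with the Descriptor fields driving the author's search function.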
12.8 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt.
2. Multimedia: Making It Work by Tay Vaughan.
3. www.w3.org/MarkUp/SGML/
4. http://en.wikipedia.org/wiki/SGML
13.4 Image editing tools
13.5 Painting and drawing tools
13.6 Sound editing tools
13.7 Animation, Video and Digital Movie Tools
13.8 Let us sum up
13.9 Lesson-end activities
13.10 Model answers to Check your progress
13.11 References
13.1 Introduction
The basic tool set for building a multimedia project contains one or more authoring systems and various editing applications for text, images, sound, and motion video. A few additional applications are also useful: tools for capturing images from the screen, for translating file formats, and for making multimedia production easier.
cameras, clip art files, or original artwork files created with a painting or drawing package. Here are some features typical of image-editing applications and of interest to multimedia developers:
- Multiple windows that provide views of more than one image at a time
- Conversion of major image-data types and industry-standard file formats
- Direct input of images from scanner and video sources
- Employment of a virtual memory scheme that uses hard disk space as RAM for images that require large amounts of memory
- Capable selection tools, such as rectangles, lassos, and magic wands, to select portions of a bitmap
- Image and balance controls for brightness, contrast, and color balance
- Good masking features
- Multiple undo and restore features
- Anti-aliasing capability, and sharpening and smoothing controls
- Color-mapping controls for precise adjustment of color balance
- Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and tinting
- Geometric transformations such as flip, skew, rotate, and distort, and perspective changes
- Ability to resample and resize an image
- 24-bit color, 8- or 4-bit indexed color, 8-bit gray-scale, black-and-white, and customizable color palettes
- Ability to create images from scratch, using line, rectangle, square, circle, ellipse, polygon, airbrush, paintbrush, pencil, and eraser tools, with customizable brush shapes and user-definable bucket and gradient fills
- Multiple typefaces, styles, and sizes, and type manipulation and masking routines
- Filters for special effects, such as crystallize, dry brush, emboss, facet, fresco, graphic pen, mosaic, pixelize, poster, ripple, smooth, splatter, stucco, twirl, watercolor, wave, and wind
- Support for third-party special-effect plug-ins
- Ability to design in layers that can be combined, hidden, and reordered
Plug-Ins
Image-editing programs usually support powerful plug-in modules available from third-party developers that allow you to wrap, twist, shadow, cut, diffuse, and otherwise filter your images for special visual effects.
Check Your Progress 1
List a few image-editing features that an image-editing tool should possess.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
Some software applications combine drawing and painting capabilities, but many authoring systems can import only bitmapped images. Typically, bitmapped images provide the greatest choice and power to the artist for rendering fine detail and effects, and today bitmaps are used in multimedia more often than drawn objects. Some vector-based packages such as Macromedia's Flash are aimed at reducing file download times on the Web, and may contain both bitmaps and drawn art. The anti-aliased character shown in the bitmap of Color Plate 5 is an example of the fine touches that improve the look of an image. Look for these features in a drawing or painting package:
- An intuitive graphical user interface with pull-down menus, status bars, palette control, and dialog boxes for quick, logical selection
- Scalable dimensions, so you can resize, stretch, and distort both large and small bitmaps
- Paint tools to create geometric shapes, from squares to circles and from curves to complex polygons
- Ability to pour a color, pattern, or gradient into any area
- Ability to paint with patterns and clip art
- Customizable pen and brush shapes and sizes
- Eyedropper tool that samples colors
- Auto trace tool that turns bitmap shapes into vector-based outlines
- Support for scalable text fonts and drop shadows
- Multiple undo capabilities, to let you try again
- Painting features such as smoothing coarse-edged objects into the background with anti-aliasing; airbrushing in variable sizes, shapes, densities, and patterns; washing colors in gradients; blending; and masking
- Support for third-party special-effect plug-ins
- Object and layering capabilities that allow you to treat separate elements independently
- Zooming, for magnified pixel editing
- All common color depths: 1-, 4-, 8-, 16-, 24-, or 32-bit color, and grayscale
- Good color management and dithering capability among color depths using various color models such as RGB, HSB, and CMYK
- Good palette management when in 8-bit mode
- Good file importing and exporting capability for image formats such as PIC, GIF, TGA, TIF, WMF, JPG, PCX, EPS, PTN, and BMP
System sounds are shipped with both Macintosh and Windows systems and are available as soon as the operating system is installed. For MIDI sound, a MIDI synthesizer is required to play and record sounds from musical instruments. For ordinary sound there is a variety of software, such as SoundEdit, MP3Cutter and WaveStudio.
A frame consists of a series of lines, known as scan lines. Scan lines have a regular and consistent length in order to produce a rectangular image: in analog formats, a line lasts for a given period of time; in digital formats, the line consists of a given number of pixels. When a device sends a frame, the video format specifies that devices send each line independently from any others and that all lines are sent in top-to-bottom order. As above, a frame may be split into fields, odd and even (by line "numbers") or upper and lower, respectively. In NTSC, the lower field comes first, then the upper field, and that completes the frame. The basics of a format are aspect ratio, frame rate, and interlacing with field order if applicable.
Video formats use a sequence of frames in a specified order. In some formats, a single frame is independent of any other (such as those used in computer video formats), so the sequence is only one frame. In other video formats, frames have an ordered position. Individual frames within a sequence typically have similar construction. However, depending on its position in the sequence, a frame may vary small elements within it to represent additional information. For example, MPEG-2 compression may eliminate the information that is redundant frame-to-frame in order to reduce the data size, preserving the information relating to changes between frames.
Analog video formats: NTSC, PAL, SECAM
Digital video formats: these are MPEG-2 based terrestrial broadcast video formats: ATSC Standards, DVB, ISDB
These are strictly the format of the video itself, and not the modulation used for transmission.
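The splitting of a frame into two interlaced fields described above can be sketched in a few lines of Python. This is a conceptual illustration only: a frame is modeled as a list of scan lines in top-to-bottom order, and line numbering starts at 0.

```python
# Sketch: split a frame (a list of scan lines, top to bottom) into its
# two interlaced fields, and weave the fields back into a full frame.

def split_fields(frame):
    upper = frame[0::2]   # lines 0, 2, 4, ... (upper field)
    lower = frame[1::2]   # lines 1, 3, 5, ... (lower field)
    return upper, lower

def weave(upper, lower):
    """Re-interleave two fields back into a full frame."""
    frame = []
    for u, l in zip(upper, lower):
        frame.extend([u, l])
    return frame

frame = ["line%d" % i for i in range(6)]
upper, lower = split_fields(frame)

# In NTSC the lower field is transmitted first, then the upper field;
# weaving the two fields back together restores the original frame.
assert weave(upper, lower) == frame
```

A real deinterlacer would also compensate for the time offset between the two fields; here only the line ordering is modeled.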
Digital broadcast
Interlaced: SDTV (480i, 576i), HDTV (1080i)
Progressive: LDTV (240p, 288p, 1seg), EDTV (480p, 576p), HDTV (720p, 1080p)
Digital TV standards (MPEG-2): ATSC, DVB, ISDB, DMB-T/H
Digital TV standards (MPEG-4 AVC): DMB-T/H, DVB, SBTVD, ISDB (1seg)
Multichannel audio: AAC (5.1), Musicam, PCM, LPCM
Digital cinema: UHDV (2540p, 4320p), DCI
13.7.3 QuickTime
QuickTime is a multimedia framework developed by Apple Inc. capable of handling various formats of digital video, media clips, sound, text, animation, music, and several types of interactive panoramic images. Available for Classic Mac OS, Mac OS X and Microsoft Windows operating systems, it provides essential support for software packages including iTunes, QuickTime Player (which can also serve as a helper application for web browsers to play media files that might otherwise fail to open) and Safari.
The QuickTime technology consists of the following:
1. The QuickTime Player application created by Apple, which is a media player.
2. The QuickTime framework, which provides a common set of APIs for encoding and decoding audio and video.
3. The QuickTime Movie (.mov) file format, an openly documented media container.
QuickTime is integral to Mac OS X, as it was with earlier versions of Mac OS. All Apple systems ship with QuickTime already installed, as it represents the core media framework for Mac OS X. QuickTime is optional for Windows systems, although many software applications require it. Apple bundles it with each iTunes for Windows download, but it is also available as a stand-alone installation.
QuickTime players
QuickTime is distributed free of charge, and includes the QuickTime Player application. Some other free player applications that rely on the QuickTime framework provide features not available in the basic QuickTime Player. For example, iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless. In Mac OS X, a simple AppleScript can be used to play a movie in full-screen mode. However, since version 7.2 the QuickTime Player also supports full-screen viewing in the non-pro version.
QuickTime framework
The QuickTime framework provides the following:
Encoding and transcoding video and audio from one format to another.
Decoding video and audio, and then sending the decoded stream to the graphics or audio subsystem for playback. In Mac OS X, QuickTime sends video playback to the Quartz Extreme (OpenGL) Compositor. A plug-in architecture for supporting additional codecs (such as DivX). The framework supports the following file types and codecs natively:
Audio
- Apple Lossless
- Audio Interchange (AIFF)
- Digital Audio: Audio CD - 16-bit (CDDA); 24-bit, 32-bit integer & floating point; and 64-bit floating point
- MIDI
- MPEG-1 Layer 3 Audio (.mp3)
- MPEG-4 AAC Audio (.m4a, .m4b, .m4p)
Video
- 3GPP & 3GPP2 file formats
- AVI file format
- Bitmap (BMP) codec and file format
- DV file (DV NTSC/PAL and DVC Pro NTSC/PAL codecs)
- Flash & FlashPix files
- GIF and Animated GIF files
- H.261, H.263, and H.264 codecs
- JPEG, Photo JPEG, and JPEG-2000 codecs and file formats
- MPEG-1, MPEG-2, and MPEG-4 Video file formats and associated codecs (such as AVC)
- QuickTime Movie (.mov) and QTVR movies
- Other video codecs: Apple Video, Cinepak, Component Video, Graphics, and Planar RGB
- Other still image formats: PNG, TIFF, and TGA
Filename extension: .qt
MIME type: video/quicktime
Type code: MooV
Uniform Type Identifier: com.apple.quicktime-movie
Developed by: Apple Inc.
Type of format: Media container
Container for: Audio, video, text
The QuickTime (.mov) file format functions as a multimedia container file that contains one or more tracks, each of which stores a particular type of data: audio, video, effects, or text (for subtitles, for example). Other file formats that QuickTime supports natively (to varying degrees) include AIFF, WAV, DV, MP3, and MPEG-1. With additional QuickTime Extensions, it can also support Ogg, ASF, FLV, MKV, DivX Media Format, and others.
Check Your Progress 2
List the different file formats supported by QuickTime.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
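The track-based container structure described above can be modeled with a small sketch. This is an illustrative Python model of the idea of typed tracks inside one container, not the real .mov on-disk layout or the QuickTime API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of a multimedia container holding typed tracks.
@dataclass
class Track:
    media_type: str   # "audio", "video", "text", or "effects"
    data: bytes = b""

@dataclass
class MovieContainer:
    tracks: List[Track] = field(default_factory=list)

    def tracks_of_type(self, media_type):
        """Return all tracks storing the given kind of data."""
        return [t for t in self.tracks if t.media_type == media_type]

movie = MovieContainer()
movie.tracks.append(Track("video"))
movie.tracks.append(Track("audio"))
movie.tracks.append(Track("text"))   # e.g. a subtitle track
```

A player would then iterate over the tracks and hand each one to the appropriate decoder, which is why one file can carry audio, video and subtitles together.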
In this lesson we have learnt about the software that can be used for creating and editing multimedia content. We have touched upon the following software tools:
i) Text editing tools such as Microsoft Word and WordPerfect
ii) Image editing tools and their features
iii) Sound editing tools and an introduction to MIDI
iv) Video and its file formats
v) The QuickTime video format
13.11 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt.
2. Multimedia: Making It Work by Tay Vaughan.
3. Multimedia in Practice: Technology and Applications by Judith Jeffcoate.
4. www.apple.com/quicktime/
5. http://en.wikipedia.org/wiki/QuickTime
14.1 Introduction
All the multimedia components are created in different tools. It is necessary to link all the multimedia objects such as audio, video, text, images and animation in order to make a complete multimedia presentation. There are different tools that provide mechanisms for linking the multimedia components.
14.2 OLE
Object Linking and Embedding (OLE) is a technology that allows embedding and linking to documents and other objects. OLE was developed by Microsoft. It is built on the Component Object Model. For developers, it brought OLE custom controls (OCX), a way to develop and use custom user interface elements. On a technical level, an
OLE object is any object that implements the IOleObject interface, possibly along with a wide range of other interfaces, depending on the object's needs.
Overview of OLE
OLE allows an editor to farm out part of a
document to another editor and then re-import it. For example, a desktop publishing system might send some text to a word processor or a picture to a bitmap editor using OLE. The main benefit of using OLE is to display visualizations of data from other programs that the host program is not normally able to generate itself (e.g. a pie chart in a text document), as well as to create a master file. References to data in this file can be made, and when the data in the master file changes, the change takes effect in the referenced document. This is called "linking" (instead of "embedding"). Its primary use is for managing compound documents, but it is also used for transferring data between different applications using drag-and-drop and clipboard operations. The concept of "embedding" is also central to much use of multimedia in Web pages, which tend to embed video, animation (including Flash animations), and audio files within the hypertext markup language (such as HTML or XHTML) or other structural markup language used (such as XML or SGML), possibly, but not necessarily, using a different embedding mechanism than OLE.
History of OLE
OLE 1.0
OLE 1.0, released in 1990, was the evolution of the original Dynamic Data Exchange (DDE) concepts that Microsoft developed for earlier versions of Windows. While DDE was limited to transferring limited amounts of data between two running applications, OLE was capable of maintaining active links between two documents or even embedding one type of document within another. OLE servers and clients communicate with system libraries using virtual function tables, or VTBLs. The VTBL consists of a structure of function pointers that the system library can use to communicate with the server or client. The server and client libraries, OLESVR.DLL and OLECLI.DLL, were originally designed to communicate between themselves using the WM_DDE_EXECUTE message.
OLE 1.0 later evolved to become an architecture for software components known as the Component Object Model (COM), and later DCOM. When an OLE object is placed on the clipboard or embedded in a document, both a visual representation in native Windows formats (such as a bitmap or metafile) and the underlying data in its own format are stored.
This allows applications to display the object without loading the application used to create the object, while also allowing the object to be edited, if the appropriate application is installed. For example, if an OpenOffice.org Writer object is embedded, both a visual representation as an Enhanced Metafile and the actual text of the document in the OpenDocument Format are stored.
OLE 2.0
OLE 2.0 was the next evolution of OLE 1.0, sharing many of the same goals, but was re-implemented on top of the Component Object Model instead of using VTBLs. New features were automation, drag-and-drop, in-place activation and structured storage.
Technical details of OLE
OLE objects and containers are implemented on top of the Component Object Model; they are objects that can implement interfaces to export their
functionality. Only the IOleObject interface is compulsory, but other interfaces may need to be implemented as well if the functionality exported by those interfaces is required. To ease understanding of what follows, a bit of terminology has to be explained. The view status of an object indicates whether the object is transparent, opaque, or opaque with a solid background, and whether it supports drawing with a specified aspect. The site of an object is an object representing the location of the object in its container. A container supports a site object for every object contained. An undo unit is an action that can be undone by the user with Ctrl-Z or the "Undo" command in the "Edit" menu.
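The linking-versus-embedding distinction at the heart of OLE can be sketched conceptually. This is a plain Python illustration, not the actual OLE/COM API: an embedded object carries its own copy of the data, while a linked object holds a reference to the master file, so a change in the master is visible to every document that links to it.

```python
# Conceptual sketch of OLE linking vs. embedding (not the real COM API).

class MasterFile:
    def __init__(self, data):
        self.data = data

class EmbeddedObject:
    """Embedding: a snapshot of the data is copied into the document."""
    def __init__(self, master):
        self.data = master.data        # copy taken at embed time

    def render(self):
        return self.data

class LinkedObject:
    """Linking: the document keeps a reference to the master file."""
    def __init__(self, master):
        self.master = master           # reference, not a copy

    def render(self):
        return self.master.data        # always reflects the master file

chart = MasterFile("sales Q1")
embedded = EmbeddedObject(chart)
linked = LinkedObject(chart)

chart.data = "sales Q2"                # the master file changes
# embedded.render() still shows the old snapshot;
# linked.render() shows the updated master data.
```

This is why OLE also stores a visual representation alongside the data: the host can render the snapshot without launching the creating application, while a link keeps the compound document current.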
14.3 DDE
Dynamic Data Exchange (DDE) is a technology for communication between multiple applications under Microsoft Windows and OS/2. Dynamic Data Exchange was first introduced in 1987 with the release of Windows 2.0. It utilized the "Windows Messaging Layer" functionality within Windows. This is the same system used by the "copy and paste" functionality; therefore, DDE continues to work even in modern versions of Windows. Newer technologies have been developed that have, to some extent, overshadowed DDE (e.g. OLE, COM, and OLE Automation); however, it is still used in several places inside Windows, e.g. for Shell file associations. The primary function of DDE is to allow Windows applications to share data. For example, a cell in Microsoft Excel could be linked to a value in another application and, when the value changed, it would be automatically updated in the Excel spreadsheet. The data communication was established by a simple, three-segment model. Each program
was known to DDE by its "application" name. Each application could further organize information by groups known as "topics", and each topic could serve up individual pieces of data as "items". For example, if a user wanted to pull a value from Microsoft Excel contained in a spreadsheet called "Book1.xls" in the cell in the first row and first column, the application would be "Excel", the topic "Book1.xls" and the item "r1c1". A common use of DDE was for custom-developed applications to control off-the-shelf software; e.g., a custom in-house application written in C or some other language might use DDE to open a Microsoft Excel spreadsheet and fill it with data, by opening a DDE conversation with Excel and sending it DDE commands. Today, however, one could also use the Excel object model with OLE Automation (part of COM). While newer technologies like COM offer features DDE doesn't have, there are also issues with regard to configuration that can make COM more difficult to use than DDE.
NetDDE
A California-based company called Wonderware developed an extension for DDE called NetDDE that could be used to initiate and maintain the network connections needed for DDE conversations between DDE-aware applications running on different computers in a network and transparently exchange data. A DDE conversation is the interaction between client and server applications. NetDDE could be used along with DDE and the DDE management library (DDEML) in applications. The NetDDE components are installed under \Windows\SYSTEM32:
DDESHARE.EXE (DDE Share Manager)
NDDEAPIR.EXE (NDDEAPI Server Side)
NDDENB32.DLL (Network DDE NetBIOS Interface)
NETDDE.EXE (Network DDE - DDE Communication)
Microsoft licensed a basic (NetBIOS Frames protocol only) version of the product for inclusion in various versions of Windows from Windows for Workgroups to Windows XP. In addition, Wonderware also sold an enhanced version of NetDDE to their own customers that included support for TCP/IP. Basic Windows applications using NetDDE are ClipBook Viewer, WinChat and Microsoft Hearts. NetDDE was still included with Windows Server 2003 and Windows XP Service Pack 2, although it was disabled by default. It has been removed entirely in Windows Vista. However, this will not prevent existing versions of NetDDE from being installed and functioning on later versions of Windows.
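The three-segment application/topic/item addressing used by DDE can be modeled with plain nested dictionaries. This Python sketch only illustrates the naming scheme from the Excel example above; it is not the actual Windows DDE messaging API, and the stored values are made up for illustration.

```python
# Sketch of DDE's three-segment addressing: application -> topic -> item.
# The names mirror the Excel example in the text; values are illustrative.

servers = {
    "Excel": {                       # application name
        "Book1.xls": {               # topic: an open spreadsheet
            "r1c1": 42,              # item: row 1, column 1
            "r2c1": 17,              # item: row 2, column 1
        }
    }
}

def dde_request(application, topic, item):
    """Resolve an application/topic/item triple to its current value."""
    return servers[application][topic][item]

value = dde_request("Excel", "Book1.xls", "r1c1")
```

In real DDE, a client opens a conversation on an (application, topic) pair and then requests or pokes individual items within that conversation; the dictionary lookup above only captures the addressing, not the message exchange.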
interface and usually can interact with each other, sometimes in ways that the operating system would not normally allow.
14.4.1 Components of an Office Suite
Most office application suites include at least a word processor and a spreadsheet element. In addition to these, the suite may contain a presentation program, database tool, graphics suite and communications tools. An office suite may also include an e-mail client and a personal information manager or groupware package.
14.4.2 Currently available Office Suites
The currently dominant office suite is Microsoft Office, which is available for Microsoft Windows and the Apple Macintosh. It has become a proprietary de facto standard in office software. An alternative is any of the OpenDocument suites, which use the free OpenDocument file format, defined by ISO/IEC 26300. The most prominent of these is OpenOffice.org, open-source software that is available for Windows, Linux, Macintosh, and other platforms. OpenOffice.org, KOffice and Kingsoft Office support many of the features of Microsoft Office, as well as most of its file formats, and OpenOffice.org has spawned several derivatives such as NeoOffice, a port for Mac OS X that integrates into its Aqua interface, and StarOffice, a commercial version by Sun Microsystems. A new category of "online word processors" allows editing of centrally stored documents using a web browser.
14.4.3 Comparison of office suites
The following table compares general and technical information for a number of office suites. Please see the individual products' articles for further information. The table only includes systems that are widely used and currently available.
Product Name | Developer | First public release | Operating system
Ability Office | Ability Plus Software | 1985 | Windows
AppleWorks | Apple | 1991 | Mac OS, Mac OS X and Windows
Corel Office | Corel | 1991 | Windows
GNOME Office | GNOME Foundation, AbiSource | ? | Cross-platform
Google Apps / Google Docs | Google | 2006 | Cross-platform
IBM Lotus Symphony | IBM | 2007 | Windows and Linux
iWork | Apple | 2005 | Mac OS X
KOffice | KDE Project | 1998 | BSD, Linux, Solaris
Lotus SmartSuite | IBM | 1991 | Windows and OS/2
MarinerPak | Mariner Software | 1996 | Mac OS and Mac OS X
Microsoft Office | Microsoft | 1990 (Office 1, for Macintosh); 1991 (Office 3, first Windows version) | Windows and Macintosh
OpenOffice.org | OpenOffice.org Organization | October 14, 2001 | Cross-platform
ShareOffice | ShareMethods | May 14, 2007 | Cross-platform
SoftMaker Office | SoftMaker | 1989 | FreeBSD, Linux, PocketPC and Windows
StarOffice | Sun Microsystems | 1995 | Cross-platform
StarOffice from Google Pack | Sun Microsystems | 1995 | Cross-platform
Check Your Progress 1
List a few available office suites.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
Presentation is the process of presenting the content of a topic to an audience. A presentation program is a computer software package used to display information, normally in the form of a slide show. It typically includes three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images, and a slide-show system to display the content. There are many different types of presentations, including professional (work-related), educational, worship and general-communication presentations. A presentation program, such as OpenOffice.org Impress, Apple Keynote, iWare CD Technologies' PMOS or Microsoft PowerPoint, is often used to generate the presentation content.
The most commonly known presentation program is Microsoft PowerPoint, although there are alternatives such as OpenOffice.org Impress and Apple's Keynote. In general, the presentation follows a hierarchical tree explored linearly (as in a table of contents), which has the advantage of following a printed text often given to participants. Adobe Acrobat is also a popular tool for presentation; it can easily link to other presentations of whatever kind, and adds the ability to zoom without loss of accuracy thanks to the vector graphics inherent in PostScript and PDF.
14.5.1 Keynote: Keynote is a presentation software application developed by Apple Inc. and a part of the iWork productivity suite (which also includes Pages and Numbers) sold by Apple.
14.5.2 Impress: A presentation program similar to Microsoft PowerPoint. It can export presentations to Adobe Flash (SWF) files, allowing them to be played on any computer with the Flash player installed. It also includes the ability to create PDF files, and the ability to read Microsoft PowerPoint's .ppt format. Impress suffers from a lack of ready-made presentation designs; however, templates are readily available on the Internet.
14.5.3 Microsoft PowerPoint: This is a presentation program developed by Microsoft. It is part of the Microsoft Office system. Microsoft PowerPoint runs on Microsoft Windows and the Mac OS computer operating systems. It is widely used by business people, educators, students, and trainers and is among the most prevalent forms of persuasion technology.
Other presentation software includes:
Adobe Persuasion
AppleWorks
Astound by Gold Disk Inc.
Beamer (LaTeX)
Google Docs (which now includes presentations)
Harvard Graphics
HyperCard
IBM Lotus Freelance Graphics
Macromedia Action!
MagicPoint
Microsoft PowerPoint
Worship presentation programs
Zoho
Presentations in HTML format: Presentacular, HTML Slidy
14.9 References
1. Multimedia: Making It Work by Tay Vaughan.
2. Multimedia: Concepts and Practice by Stephen McGloughlin.
15.3 General design issues
15.4 Effective Human Computer Interaction
15.5 Video at the user interface
15.6 Audio at the user interface
15.7 User friendliness as the primary goal
15.8 Let us sum up
15.9 Model answers to Check your progress
15.10 References
15.1 Introduction
In computer science, we understand the user interface as the interactive input and output of a computer as it is perceived and operated on by users. Multimedia user interfaces are used to make multimedia content active; without a user interface, the multimedia content is considered linear or passive.
Relational Structures
This group of characteristics refers to the way in which a relation maps among its domain sets (dependency). There are functional dependencies and non-functional dependencies. An example of a relational structure which expresses functional dependency is a bar chart. An example of a relational structure which expresses non-functional dependency is a student entry in a relational database.
Multi-domain Relations
Relations can be considered across multiple domains, such as: (1) multiple attributes of a single object set (e.g., positions, colors, shapes, and/or sizes of a set of objects in a chart); (2) multiple object sets (e.g., a cluster of text and graphical symbols on a map); and (3) multiple displays.
Large Data Sets
Large data sets refer to numerous attributes of collections of heterogeneous objects (e.g., presentations of semantic networks, databases with numerous object types and attributes of technical documents for large systems, etc.).
Check Your Progress 1
List a few information characteristics required for presentation.
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of this lesson.
15.3.2 Presentation Function
A presentation function is a program which displays an object. It is important to specify the presentation function independent of presentation form, style or the information it conveys. Several approaches consider the presentation function from different points of view.
15.3.3 Presentation Design Knowledge
To design a presentation, issues like content selection, media and presentation-technique selection and presentation coordination must be considered. Content selection is the key to conveying the information to the user. However, we are not free in this selection because content can be influenced by constraints imposed by the size and complexity of the presentation. Media selection partly determines the information characteristics. For selecting presentation techniques, rules can be used. For example, rules for selection methods, i.e., for supporting a user's ability to locate facts in a presentation, may specify a preference for graphical techniques. Coordination can be viewed as a process of composition. Coordination needs mechanisms such as (1) encoding techniques, (2) presentation objects that represent facts and (3) multiple displays. Coordination of multimedia employs a set of composition operators for merging, aligning and synthesizing different objects to construct displays that convey
respect to other human-computer interfaces; (4) interactive capabilities; and (5) separability of the user interface from the application.
During audio output, the additional presentation dimension of space can be introduced using two or more separate channels to give a more natural distribution of sound. The best-known example of this technique is stereo. In the case of monophony, all audio sources have the same spatial location. A listener can only properly understand the loudest audio signal. The same effect can be simulated by closing one ear. Stereophony allows listeners with bilateral hearing
capabilities to hear lower-intensity sounds. It is important to mention that the main advantage of bilateral hearing is not the spatial localization of audio sources, but the extraction of less intensive signals in a loud environment.
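How two channels can encode a spatial position, as in the stereo output described above, can be sketched with constant-power panning of a mono sample sequence. This is a simplified illustration: real spatial hearing also depends on inter-ear delay and filtering, which this sketch does not model.

```python
import math

# Sketch: constant-power panning of a mono signal into left/right channels.
def pan(mono, position):
    """position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (position + 1.0) * math.pi / 4.0   # map to 0 .. pi/2
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    left = [s * left_gain for s in mono]
    right = [s * right_gain for s in mono]
    return left, right

samples = [0.5, -0.25, 1.0]
left, right = pan(samples, -1.0)   # hard left: the right channel is silent
```

With cos/sin gains, the total signal power stays constant as the source moves across the stereo field, which is why this scheme is preferred over simple linear cross-fading.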
15.7.2 Context-sensitive Help Functions
A context-sensitive help function using hypermedia techniques is very helpful, i.e., according to the state of the application, different help texts are displayed.

15.7.3 Easy-to-Remember Instructions
A user-friendly interface must also have the property that the user easily remembers the application's instruction rules. Easily remembered instructions may be supported by an intuitive association with what the user already knows.

15.7.4 Effective Instructions
The user interface should enable effective use of the application. This means: Logically connected functions should be presented together and similarly. Graphical symbols or short clips are more effective than textual input and output because they trigger faster recognition. Different media should be exchangeable among different applications. Actions should be activated quickly. A configuration of a user interface should be usable by both professional and sporadic users.

15.7.5 Aesthetics
With respect to aesthetics, the color combination, character sets, resolution and form of the window need to be considered. They determine a user's first and lasting impressions.
15.7.6 Entry Elements
User interfaces use different ways to specify entries for the user:
Entries in a menu: In menus there are visible and non-visible entry elements. Entries relevant to the task should be made available for easy menu selection.
Entries on a graphical interface: If the interface includes text, the entries can be marked through color and/or a different font. If the interface includes images, the entries can be written over the image.

15.7.7 Presentation
The presentation, i.e., the optical image at the user interface, can have the following variants: full text; abbreviated text; icons, i.e., graphics; micons, i.e., motion video.

15.7.8 Dialogue Boxes
Different dialogue boxes should have similar constructions. This requirement applies to the design of: (1) the buttons OK and Abort; (2) joined windows; and (3) other applications in the same window system.
15.7.9 Additional Design Criteria
A few additional hints for designing a user-friendly interface: The form of the cursor can change to visualize the current state of the system; for example, an hourglass can be shown while a processing task is in progress. When time-intensive tasks are performed, the progress of the task should be presented. The selected entry should be immediately highlighted as work in progress before processing actually starts. The main emphasis has been on video and audio media because they represent live information. At the user interface, these media become important because they help users learn by enabling them to choose how to distribute research responsibilities among applications (e.g., on-line encyclopedias, tutors, simulations), to compose and integrate results, and to share learned material with colleagues (e.g., via video conferencing).
Additionally, computer applications can effectively do less reasoning about the selection of a multimedia element (e.g., text, graphics, animation or sound), since alternative media can be selected by the user.

Check Your Progress 2
Distinguish additive and subtractive colors and write their areas of use.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
15.10 References
1. Multimedia: Making It Work by Tay Vaughan
2. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt
3. Multimedia and Virtual Reality: Designing Multisensory User Interfaces by Alistair Sutcliffe
16.1 Introduction
The consideration of multimedia applications supports the view that local systems expand toward distributed solutions. Applications such as kiosks, multimedia mail, collaborative work systems, virtual reality applications and others require high-speed networks with a high transfer rate and communication systems with adaptive, lightweight transmission protocols on top of the networks.
From the communication perspective, we divide the higher layers of the Multimedia Communication System (MCS) into two architectural subsystems: an
Group members may have different roles in the CSCW, e.g., a member of a group (if he or she is listed in the group definition), a participant of a group activity (if he or she successfully joins the conference), a conference initiator, a conference chairman, a token holder or an observer. Groups may consist of members who have homogeneous or heterogeneous characteristics and requirements of their collaborative environment.
Control: Control during collaboration can be centralized or distributed. Centralized control means that there is a chairman (e.g., a main manager) who controls the collaborative work and every group member (e.g., user agent) reports to him or her. Distributed control means that every group member has control over his/her own tasks in the collaborative
work, and distributed control protocols are in place to provide consistent collaboration. Other partition parameters may include locality and collaboration awareness. Locality partition means that a collaboration can occur either in the same place (e.g., a group meeting in an office or conference room) or among users located in different places through tele-collaboration. Group communication systems can be further categorized into computer-augmented collaboration systems, where collaboration is emphasized, and collaboration-augmented computing systems, where the concentration is on computing.

16.2.3 Group Communication Architecture
Group communication (GC) involves the communication of multiple users in a synchronous or an asynchronous mode with centralized or distributed control. A group communication architecture consists of a support model, a system model and an interface model. The GC support model includes group communication agents that communicate via a multi-point multicast communication network, as shown in the following figure. Group communication agents may use the following for their collaboration:
Group Rendezvous: Group rendezvous denotes a method which allows one to organize meetings and to get information about the group, ongoing meetings and other static and dynamic information.
Shared Applications: Application sharing denotes techniques which allow one to replicate information to multiple users simultaneously. The remote users may point to interesting aspects of the information (e.g., via tele-pointing) and modify it so that all users can immediately see the updated information
(e.g., joint editing). Shared applications mostly belong to collaboration-transparent applications.
Conferencing: Conferencing is a simple form of collaborative computing. This service provides the management of multiple users communicating with each other using multiple media. Conferencing applications belong to collaboration-aware applications.
The GC system model is based on a client-server model. Clients provide user interfaces for smooth interaction between group members and the system. Servers supply functions for accomplishing the group communication work, and each server specializes in its own function.

Check Your Progress 1
List the collaborations used by group communication agents.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
[Figure: Group communication support model. Each group communication agent contains group rendezvous, application sharing, conferencing and communication (transport) support components; the agents are connected through a multicast communication network.]
is high network traffic because the output of the application needs to be distributed every time. Replicated Architecture In a replicated architecture, a copy of the shared application runs locally at each site. Input events to each application are distributed to all sites and each copy of the shared application is executed locally at each site. The advantages of this architecture are low network traffic, because only input events are distributed among the sites, and low response times, since all participants get their output from local copies of the application. The disadvantages are the requirement of the same execution environment for the application at each site, and the difficulty in maintaining consistency.
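The replicated architecture can be sketched in a few lines: only input events cross the network, and every site applies each event to its own local copy of the shared state. The class and function names below are illustrative, not a real application-sharing toolkit API.

```python
# Sketch of a replicated shared-application architecture: only input
# events travel over the network; every site applies each event to its
# own local copy, so rendered output never has to be distributed.

class Site:
    def __init__(self, name: str):
        self.name = name
        self.document = []           # local copy of the shared state

    def apply_event(self, event: str) -> None:
        self.document.append(event)  # the event is executed locally

def distribute(event: str, sites: list["Site"]) -> None:
    # Low network traffic: the event, not the output, is sent.
    for site in sites:
        site.apply_event(event)

sites = [Site("A"), Site("B"), Site("C")]
distribute("insert 'hello'", sites)
```

The replicas stay consistent only as long as every site sees the same events in the same order, which is exactly the consistency-maintenance difficulty the text mentions.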
16.4 Conferencing
Conferencing supports collaborative computing and is also called synchronous telecollaboration. Conferencing is a management service that controls the communication among multiple users via multiple media, such as video and audio, to achieve
simultaneous face-to-face communication. More precisely, video and audio have the following purposes in a tele-conferencing system: Video is used in technical discussions to display view-graphs and to indicate how many users are still physically present at a conference. For visual support, workstations, PCs or video walls can be used. For conferences with more than three or four participants, the screen resources on a PC or workstation run out quickly, particularly if other applications, such as shared editors or drawing spaces, are used. Hence, mechanisms which quickly resize individual images should be used.
Conferencing services control a conference (i.e., a collection of shared state information such as who is participating in the conference, the conference name, the start of the conference, policies associated with the conference, etc.). Conference control includes several functions:
Establishing a conference, where the conference participants agree upon a common state, such as the identity of a chairman (moderator), access rights (floor control) and audio encoding. Conference systems may perform registration, admission and negotiation services during the conference establishment phase, but they must be flexible and allow participants to join and leave individual media sessions or the whole conference. The flexibility depends on the control model.
Closing a conference.
Adding new users and removing users who leave the conference.
Conference states can be stored either on a central machine (centralized control), where a central application acts as the repository for all information related to the conference, or in a distributed fashion.
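The conference-control functions listed above (establishing, joining, leaving, closing) can be sketched as a small centralized state holder. The class and method names are our own illustrative choices, not a standard conferencing API.

```python
# Illustrative sketch of centralized conference control: one object is
# the repository for the shared conference state, and participants
# join and leave while the conference is open.

class Conference:
    def __init__(self, name: str, chairman: str):
        self.name = name
        self.chairman = chairman       # moderator agreed at establishment
        self.participants = {chairman}
        self.open = True

    def join(self, user: str) -> None:
        if self.open:                  # flexible joining after establishment
            self.participants.add(user)

    def leave(self, user: str) -> None:
        self.participants.discard(user)

    def close(self) -> None:
        self.open = False
        self.participants.clear()

conf = Conference("design-review", chairman="alice")
conf.join("bob")
conf.leave("bob")
conf.close()
```

In a distributed design the same state would instead be replicated at every site and kept consistent by a control protocol.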
computing area; therefore, we concentrate on architectural and management issues in this area.

16.5.1 Architecture
A session management architecture is built around an entity, the session manager, which separates the control from the transport. By creating a reusable session manager, which is separated from the user interface, conference-oriented tools avoid a duplication of effort.
The session control architecture consists of the following components:
Session Manager: The session manager includes local and remote functionalities. Local functionalities may include (1) membership control management, such as participant authentication or presentation of coordinated user interfaces; (2) control management for the shared workspace, such as floor control; (3) media control management, such as intercommunication among media agents or synchronization; (4) configuration management, such as an exchange of interrelated QoS parameters or selection of appropriate services according to QoS; and (5) conference control management, such as the establishment, modification and closing of a conference.
Media Agents: Media agents are separate from the session manager and are responsible for decisions specific to each type of media. This modularity allows replacement of agents. Each agent performs its own control mechanism over the particular medium, such as mute, unmute, change video quality, start sending, stop sending, etc.
Shared Workspace Agent: The shared workspace agent transmits shared objects (e.g., telepointer coordinates, graphical or textual objects) among the shared applications.

16.5.2 Session Control
Each session is described through the session state. This state information is either private or shared among all session participants. Depending on the functions which an application requires and a session control provides, several control mechanisms are embedded in session management:
Floor control: In a shared workspace, floor control is used to provide access to the shared workspace. Floor control in shared applications is often used to maintain data consistency.
Conference control: In conferencing applications, conference control is used.
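Floor control is commonly realized with a token: only the current floor holder may write to the shared workspace, which keeps the shared data consistent. The following is a minimal token-based sketch; the API shown is illustrative, not taken from a real session manager.

```python
# Minimal token-based floor control sketch: one token (the "floor")
# circulates among participants; writers must hold it, and waiting
# requesters are served in first-come, first-served order.

class FloorControl:
    def __init__(self):
        self.holder = None        # current token holder
        self.queue = []           # participants waiting for the floor

    def request(self, user: str) -> bool:
        if self.holder is None:
            self.holder = user
            return True           # floor granted immediately
        self.queue.append(user)
        return False              # must wait for a release

    def release(self, user: str) -> None:
        if self.holder == user:   # only the holder can pass the floor on
            self.holder = self.queue.pop(0) if self.queue else None

floor = FloorControl()
floor.request("alice")    # granted
floor.request("bob")      # queued
floor.release("alice")    # the floor passes to bob
```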
Media control: This control mainly includes functionality such as the synchronization of media streams.
Configuration control: Configuration control includes control of media quality, QoS handling, resource availability and other system components to provide a session according to the user's requirements.
Membership control: This may include services such as invitation to a session, registration into a session and modification of the membership during the session.
16.9 References
1. Multimedia: Concepts and Practice by Stephen McGloughlin
2. Digital Multimedia by Nigel Chapman and Jenny Chapman
3. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt
17.1 Introduction
In order to distribute a multimedia product, it is necessary to have a set of protocols which enable smooth transmission of data between two hosts. Protocols are the rules and conventions governing network communication between two computers.
User and Application Requirements
Networked multimedia applications impose new requirements on data handling in computing and communications because they need (1) substantial data throughput, (2) fast data forwarding, (3) service guarantees, and (4) multicasting.
Data Throughput: Audio and video data exhibit stream-like behavior, and they demand, even in compressed mode, high data throughput. In a workstation or network, several such streams may exist concurrently, together demanding a high throughput. Further, the data movement requirements on the local end-system translate into the manipulation of large quantities of data in real time, where, for example, data copying can create a bottleneck in the system.
Fast Data Forwarding: Fast data forwarding imposes a problem on end-systems where different applications exist in the same end-system and each requires data movement ranging from normal, error-free data transmission to new time-constrained traffic types. Generally, the faster a communication system can transfer a data packet, the fewer packets need to be buffered. This requirement leads to careful spatial and temporal resource management in the end-systems and routers/switches. The application imposes constraints on the total maximal end-to-end delay. In a retrieval-like application, such as video-on-demand, a delay of up to one second may be easily tolerated. In contrast, a dialogue application, such as a videophone or videoconference, demands a low end-to-end delay; delays above typically 200 ms inhibit natural communication between the users.
Service Guarantees: Distributed multimedia applications need service guarantees; otherwise they will not be accepted, as these systems, working with continuous media, compete against radio and television services. To achieve service guarantees, resource management must be used. Without resource management
in end-systems and switches/routers, multimedia systems cannot provide reliable QoS to their users, because transmission over unreserved resources leads to dropped or delayed packets.
Multicasting: Multicast is important for distributed multimedia applications in terms of sharing resources like the network bandwidth and the communication protocol processing at end-systems.
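The 200 ms end-to-end delay bound for dialogue applications can be made concrete with a back-of-the-envelope budget check: every processing stage between the two users must fit inside the total budget. The component delays below are illustrative assumptions, not measurements from the text.

```python
# Back-of-the-envelope check of a dialogue application's end-to-end
# delay budget: roughly 200 ms in total, split across the stages a
# media sample passes through. All component values are assumed.

BUDGET_MS = 200.0

delays_ms = {
    "capture + compression": 50.0,
    "end-system protocol processing": 20.0,
    "network transit": 80.0,
    "receiver buffering": 30.0,
    "decompression + display": 15.0,
}

total = sum(delays_ms.values())   # 195 ms in this example
slack = BUDGET_MS - total         # 5 ms of headroom
assert total <= BUDGET_MS, "budget exceeded: renegotiate QoS"
```

If any single stage grows (e.g., deeper receiver buffering), another stage's share must shrink, which is one reason resource management across the whole path is needed.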
Processing and Protocol Constraints
Communication protocols have, on the other hand, some constraints which need to be considered when we want to match application requirements to system platforms. A typical multimedia application does not require processing of audio and video to be performed by the application itself. Usually the data are obtained from a source (e.g., microphone, camera, disk or network) and are forwarded to a sink (e.g., speaker, display or network). In such a case, the requirements of continuous-media data are satisfied best if the data take the shortest possible path through the system, i.e., data are copied directly from adapter to adapter, and the program merely sets the correct switches for the data flow by connecting sources to sinks. Hence, the application itself never really touches the data, as is the case in traditional processing.
A problem with direct copying from adapter to adapter is the control and change of QoS parameters. In multimedia systems, such an adapter-to-adapter connection is defined by the capabilities of the two involved adapters and the bus performance. In today's systems, this connection is static. This architecture of low-level data streaming corresponds to proposals for using additional new buses for audio and video transfer within a computer. It also enables a switch-based rather than bus-based data transfer architecture. Note that, in practice, we encounter headers and trailers surrounding continuous-media data coming from devices and being delivered to devices. In the case of compressed video data, e.g., MPEG-2, the program stream contains several layers of headers in addition to the actual group of pictures to be displayed.
Protocols involve a lot of data movement because of the layered structure of the communication architecture. But copying of data is expensive and has become a bottleneck, hence other mechanisms for buffer management must be found.
Different layers of the communication system may have different PDU sizes; therefore, segmentation and reassembly occur. This phase has to be done fast and efficiently. Hence, this portion of a protocol stack, at least in the lower layers, is implemented in hardware, or through efficient mechanisms in software.
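Segmentation and reassembly can be sketched as follows: a large media frame is cut into numbered segments that fit the lower layer's PDU size and restored in order at the receiver. The helper names and the 1400-byte PDU size are our own illustrative assumptions.

```python
# Illustrative sketch of segmentation and reassembly between layers
# with different PDU sizes.

PDU_SIZE = 1400  # assumed lower-layer payload size, in bytes

def segment(frame: bytes) -> list[tuple[int, bytes]]:
    # Attach a sequence number to every segment.
    return [(i, frame[off:off + PDU_SIZE])
            for i, off in enumerate(range(0, len(frame), PDU_SIZE))]

def reassemble(pdus: list[tuple[int, bytes]]) -> bytes:
    # Sequence numbers restore the order even after out-of-order delivery.
    return b"".join(data for _, data in sorted(pdus))

frame = bytes(5000)        # e.g., one compressed video frame
pdus = segment(frame)      # four PDUs: 1400 + 1400 + 1400 + 800 bytes
restored = reassemble(list(reversed(pdus)))
```

Because this step sits on the critical path of every packet, real stacks implement it in hardware or with highly tuned buffer management rather than by copying as this sketch does.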
17.3.1 Internet Transport Protocols
The Internet protocol stack includes two types of transport protocols:
Transmission Control Protocol (TCP): Early video conferencing applications were implemented on top of the TCP protocol. TCP provides a reliable, serial communication path, or virtual circuit, between processes exchanging a full-duplex stream of bytes. Each process is assumed to reside in an Internet host that is identified by an IP address. Each process has a number of logical, full-duplex ports through which it can set up and use full-duplex TCP connections. Multimedia applications do not always require full-duplex connections for the transport of continuous media. An example is a TV broadcast over a LAN, which requires a full-duplex control connection, but often a simplex continuous-media connection is sufficient.
During data transmission over a TCP connection, TCP must achieve reliable, sequenced delivery of a stream of bytes by means of an underlying, unreliable datagram service. To achieve this, TCP makes use of retransmission on timeouts and positive acknowledgments upon receipt of information. Because retransmission can cause both out-of-order arrival and duplication of data, sequence numbering is crucial. Flow control in TCP makes use of a window technique in which the receiving side of the connection reports to the sending side the sequence numbers it may transmit at any time and those it has received contiguously thus far.
For multimedia, positive acknowledgment causes substantial overhead, as all packets are sent at a fixed rate; negative acknowledgment would be a better strategy. Further, TCP is not suitable for real-time video and audio transmission because its retransmission mechanism may cause violations of deadlines, which disrupt the continuity of continuous-media streams.
TCP was designed as a transport protocol suitable for non-real-time reliable applications, such as file transfer, where it performs best.
User Datagram Protocol (UDP): UDP is a simple extension of the Internet network protocol IP that supports the multiplexing of datagrams exchanged between pairs of Internet hosts. It offers only multiplexing and checksumming, nothing else. Higher-level protocols using UDP must provide their own retransmission, packetization, reassembly, flow control, congestion avoidance, etc. Many multimedia applications use this protocol because it provides, to some degree, the real-time transport property, although loss of PDUs may occur.
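Since UDP leaves error recovery to the higher layers, a multimedia protocol built on it can use the negative-acknowledgment strategy mentioned above: the receiver tracks sequence numbers and reports only the gaps, instead of positively acknowledging every packet as TCP does. The class below is a purely illustrative sketch of that idea.

```python
# Sketch of NAK-based loss detection at a media receiver: only gaps in
# the sequence-number space are reported upstream.

class NakReceiver:
    def __init__(self):
        self.expected = 0         # next sequence number we expect
        self.missing = []         # sequence numbers to NAK

    def receive(self, seq: int) -> None:
        if seq > self.expected:   # gap detected: NAK the lost packets
            self.missing.extend(range(self.expected, seq))
        self.expected = max(self.expected, seq + 1)

rx = NakReceiver()
for seq in (0, 1, 3, 4):          # packet 2 was lost in the network
    rx.receive(seq)
# rx.missing now names only the lost packet, so a single NAK suffices
```

Whether the sender actually retransmits, or the receiver simply conceals the loss, depends on whether the retransmission can still meet the stream's playout deadline.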
For experimental purposes, UDP above IP can be used as a simple, unreliable connection for media transport. In general, UDP is not suitable for continuous-media streams because it does not provide the notion of connections, at least at the transport layer; therefore, different service guarantees cannot be provided.
Real-time Transport Protocol (RTP)
RTP is an end-to-end protocol providing network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. It is specified, and still being augmented, by the Audio/Video Transport Working Group. RTP is primarily designed to satisfy the needs of multi-party multimedia conferences, but it is not limited to that particular application. RTP has a companion protocol, RTCP (RTP Control Protocol), to convey information about the participants of a conference. RTP provides functions such as determination of media encoding, synchronization, framing, error detection, encryption, timing and source identification. RTCP is used for monitoring QoS and for conveying information about the participants in an ongoing session. The monitoring aspect of RTCP is handled by an application called a QoS monitor, which receives the RTCP messages.
RTP does not address resource reservation and does not guarantee QoS for real-time services. This means that it does not provide mechanisms to ensure timely or guaranteed delivery of data, but relies on lower-layer services to do so. RTP makes use of the network protocol ST-II or UDP/IP for the delivery of data, and relies on the underlying protocol(s) to provide demultiplexing. Profiles are used to specify certain parts of the header for particular sets of applications. This means that particular media information is stored in an audio/video profile, such as a set of formats (e.g., media encodings) and a default mapping of those formats.
Xpress Transport Protocol (XTP): XTP was designed to be an efficient protocol, taking into account the low error ratios and higher speeds of current networks. It is still in the process of augmentation by the XTP Forum to provide a better platform for the incoming variety of applications.
XTP integrates transport and network protocol functionalities to have more control over the environment in which it operates. XTP is intended to be useful in a wide variety of environments, from real-time control systems to remote procedure calls in distributed operating systems and distributed databases to bulk data transfer. It defines for this purpose six service
types: connection, transaction, unacknowledged datagram, acknowledged datagram, isochronous stream and bulk data. In XTP, the end-user is represented by a context becoming active within an XTP implementation.

17.3.2 Other Transport Protocols
Some other transport protocols designed for multimedia transmission are:
Tenet Transport Protocols: The Tenet protocol suite for the support of multimedia transmission was developed by the Tenet Group at the University of California at Berkeley. The transport protocols in this protocol stack are the Real-time Message Transport Protocol (RMTP) and the Continuous Media Transport Protocol (CMTP). They run above the Real-Time Internet Protocol (RTIP).
Heidelberg Transport System (HeiTS): The Heidelberg Transport System (HeiTS) is a transport system for multimedia communication. It was developed at the IBM European Networking Center (ENC), Heidelberg. HeiTS provides the raw transport of multimedia over networks.
METS (Multimedia Enhanced Transport Service): METS is the multimedia transport service developed at the University of Lancaster. It runs on top of ATM networks. The transport protocol provides an ordered, but non-assured, connection-oriented communication service and features resource allocation based on the user's QoS specification. It allows the user to select upcalls for the notification of corrupt and lost data at the receiver, and also allows the user to re-negotiate QoS levels.

Check Your Progress 1
Write a brief account of the protocols used for multimedia content transmission.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
maximize reliability, minimize monetary cost and normal service. Any assertion of TOS can only be used if the network into which an IP packet is injected has a class of service that matches the particular combination of TOS markings selected.
Internet Group Management Protocol (IGMP): The Internet Group Management Protocol (IGMP) is a protocol for managing Internet multicast groups. It is used by conferencing applications to join and leave particular multicast groups. The basic service permits a source to send datagrams to all members of a multicast group. There are no guarantees of delivery to any or all targets in the group.
Multicast routers periodically send queries (Host Membership Query messages) to refresh their knowledge of the memberships present on a particular network. If no reports are received for a particular group after some number of queries, the routers assume the group has no local members and that they need not forward remotely originated multicasts for that group onto the local network. Otherwise, hosts respond to a query by generating reports (Host Membership Reports), reporting each host group to which they belong on the network interface from which the query was received. To avoid an implosion of concurrent reports, there are two possibilities: either a host, rather than sending reports immediately, delays the generation of the report for a D-second interval; or a report is sent with an IP destination address equal to the host group address being reported. This causes other members of the same group on the network to overhear the report, so only one report per group is presented on the network.
Resource Reservation Protocol (RSVP): RSVP is a protocol which transfers reservations and keeps state at the intermediate nodes. It does not have a data transfer component. RSVP messages are sent as IP datagrams, and the router keeps soft state, which is refreshed by periodic reservation messages. In the absence of these refresh messages, the routers delete the reservation after a certain timeout. This protocol was specified by the IETF to provide one of the components for integrated services on the Internet. To implement integrated services, four components need to be implemented: the packet scheduler, the admission control routine, the classifier and the reservation setup protocol.

17.4.2 STream Protocol, Version 2 (ST-II)
ST-II provides a connection-oriented, guaranteed service for data transport based on the stream model. The connections between the sender and several receivers are set up as uni-directional connections, although duplex connections can also be set up.
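The RSVP soft-state idea described above can be sketched in a few lines: a router keeps each reservation only while periodic refresh messages arrive, and an entry that is not refreshed within the timeout is silently deleted. Names and the timeout value are illustrative assumptions, not taken from the RSVP specification.

```python
# Sketch of RSVP-style soft state at a router: reservations persist
# only while refresh messages keep arriving.

TIMEOUT = 30.0  # assumed seconds without a refresh before deletion

class SoftStateTable:
    def __init__(self):
        self.reservations = {}    # flow id -> time of last refresh

    def refresh(self, flow: str, now: float) -> None:
        self.reservations[flow] = now   # install or refresh the entry

    def expire(self, now: float) -> None:
        # Drop every entry whose last refresh is older than TIMEOUT.
        self.reservations = {f: t for f, t in self.reservations.items()
                             if now - t < TIMEOUT}

table = SoftStateTable()
table.refresh("video-1", now=0.0)
table.expire(now=10.0)    # refreshed recently: the entry is kept
table.expire(now=60.0)    # no refresh for 60 s: the entry is deleted
```

Soft state is what lets RSVP adapt automatically when receivers disappear or routes change: the stale reservation simply times out, with no explicit teardown required.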
ST-II is an extension of the original ST protocol. It consists of two components: the ST Control Message Protocol (SCMP), which is a reliable, connectionless transport for the protocol messages, and the ST protocol itself, which is an unreliable transport for the data.
forwarding, (3) service guarantees, and (4) multicasting. Transport protocols supporting multimedia transmission need new features and must provide the following functions: semi-reliability, multicasting, NAK (negative acknowledgment)-based error recovery and rate control.
Transport protocols such as TCP and UDP are used in the Internet protocol stack for multimedia transmission; new emerging transport protocols, such as RTP, XTP and others, are better suited for multimedia.
17.8 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt
2. Multimedia Networking: Technology, Management and Applications by Syed Mahbubur Rahman
3. Wireless Multimedia Network Technologies by Rajamani Ganesh, Kaveh Pahlavan and Zoran Zvonar
4. Wireless Communication Technologies: New Multimedia Systems by Norihiko Morinaga, Seiichi Sampei and Ryuji Kohno
5. Multimedia Systems, Standards, and Networks by Atul Puri and Tsuhan Chen
18.1 Introduction
Every product is expected to have a certain quality apart from satisfying its requirements, and that quality is measured by various parameters. The parameterization of services is defined in ISO (International Organization for Standardization) standards through the notion of Quality of Service (QoS). The ISO standard defines QoS as a concept for specifying how good the offered networking services are. QoS can be characterized by a number of specific parameters. There are several important issues which need to be considered with respect to QoS:
QoS Layering
Traditional QoS (ISO standards) was provided by the network layer of the communication system. An enhancement of QoS was achieved by introducing QoS into the transport services. For an MCS, the QoS notion must be extended because many other services contribute to the end-to-end service quality. To discuss QoS and resource management further, we need a layered model of the MCS with respect to QoS; we refer throughout this lesson to the model shown in the following figure. The MCS consists of three layers: application, system (including communication services and operating system services), and devices (network and multimedia (MM) devices). Above the application may or may not reside a human user. This implies the introduction of QoS in the application (application QoS), in the system (system QoS) and in the network (network QoS). In the case of a human user, the MCS may also have a user QoS specification. Within the device layer we concentrate on the network device and its QoS, because it is of interest to us in the MCS; the MM devices find their (partial) representation in the application QoS.
QoS Description
The set of parameters chosen for a particular service determines what will be measured as the QoS. Most current QoS parameters differ from the parameters described in the ISO standards because of the variety of applications, media sent and the quality of networks and end-systems. This also leads to many different QoS parameterizations in the literature. We give here one possible set of QoS parameters for each layer of the MCS.
[Figure: Layered QoS model of the MCS: the user (user QoS) above the application (application QoS), which runs on the system, i.e., the operating and communication system (system QoS).]
The system QoS parameters describe requirements on the communication services and OS services resulting from the application QoS. They may be specified in terms of both quantitative and qualitative criteria. Quantitative criteria are those which can be evaluated in terms of certain measures, such as bits per second, number of errors, task processing time, PDU size, etc. The QoS parameters include throughput, delay, response time, rate, data corruption at the system level, and task and buffer specification.
18.3 Translation
It is widely accepted that different MCS components require different QoS parameters. For example, the packet loss rate, known from packet networks, has no meaning as a QoS parameter of a video capture device. Likewise, frame quality is of little use to a link-layer service provider, although the frame quality, in terms of the number of pixels along both axes, is a QoS value used to initialize frame capture buffers. We always distinguish between user, application, system and network, with different QoS parameters. However, in future systems there may be even more layers, or there may be a hierarchy of layers, where some QoS values are inherited and others are specific to certain components. In any case, it must always be possible to derive all QoS values from the user and application QoS values. This derivation, known as translation, may require additional knowledge stored together with the specific component. Hence, translation is an additional service for layer-to-layer communication during the call establishment phase. The split of parameters requires translation functions as follows:
Human Interface - Application QoS: The service which may implement the translation between a human user and the application QoS parameters is called a tuning service. A tuning service provides the user with a Graphical User Interface (GUI) for input of application QoS, as well as output of the negotiated application QoS. The translation is represented through video and audio clips (in the case of audio-visual media), which run at the negotiated quality corresponding to, for example, the video frame resolution that the end-system and the network can support.
Application QoS - System QoS: Here, the translation must map the application requirements into the system QoS parameters, which may lead to translations such as from a high-quality synchronization user requirement to a small (milliseconds) synchronization skew QoS parameter, or from video frame size to transport packet size.
It may also be connected with possible segmentation/reassembly functions.
Multimedia Systems- M.Sc(IT)
System QoS - Network QoS: This translation maps the system QoS (e.g., transport packet end-to-end delay) into the underlying network QoS parameters (e.g., in ATM, the end-to-end delay of cells) and vice versa.
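As a concrete illustration of the application-to-system translation, the hypothetical function below maps a video frame size and frame rate into a required throughput and a per-frame packet count. The function name and the 8 KB packet payload are assumptions for this sketch, not values from the text:

```python
import math

def translate_app_to_system(frame_bytes: int, frames_per_second: float,
                            packet_payload_bytes: int = 8192):
    """Map application QoS (frame size, frame rate) to system QoS
    (packets per frame, packet rate, required throughput).
    The packet payload size is an assumed transport parameter."""
    packets_per_frame = math.ceil(frame_bytes / packet_payload_bytes)
    packet_rate = packets_per_frame * frames_per_second
    throughput_bps = frame_bytes * frames_per_second * 8
    return packets_per_frame, packet_rate, throughput_bps

# 40 KB compressed frames at 25 frames per second
ppf, rate, bps = translate_app_to_system(40_000, 25.0)
```

This is the kind of derivation a translation service performs at call establishment; segmentation/reassembly then operates on the resulting packet count.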
Virtual Clock
This discipline emulates Time Division Multiplexing (TDM). A virtual transmission time is allocated to each packet; it is the time at which the packet would have been transmitted if the server were actually doing TDM.
Delay Earliest-Due-Date (Delay EDD)
Delay EDD is an extension of EDF (Earliest Deadline First) scheduling where the server negotiates a service contract with each source. The contract states that if a source obeys a peak and average sending rate, then the server provides a bounded delay. The key then lies in the assignment of deadlines to packets. The server sets a packet's deadline to the time at which it should be sent had it been received according to the contract. This is actually the expected arrival time added
to the delay bound at the server. By reserving bandwidth at the peak rate, Delay EDD can assure each channel a guaranteed delay bound.
Jitter Earliest-Due-Date (Jitter EDD)
Jitter EDD extends Delay EDD to provide delay-jitter bounds. After a packet has been served at each server, it is stamped with the difference between its deadline and its actual finishing time. A regulator at the entrance of the next switch holds the packet for this period before it is made eligible to be scheduled. This provides minimum and maximum delay guarantees.
Stop-and-Go
This discipline preserves the smoothness property of the traffic as it traverses the network. The main idea is to treat all traffic in frames of length T, meaning that time is divided into frames. At each frame time, only packets that arrived at the server in the previous frame time are sent. It can be shown that the delay and delay-jitter are bounded, although the jitter bound does not come free. The reason is that under the Stop-and-Go rules, packets arriving at the start of an incoming frame must be held for a full frame time T before being forwarded, so packets that would otherwise arrive quickly are instead delayed. Further, since the delay and delay-jitter bounds are linked to the length of the frame time, Stop-and-Go can be improved by using multiple frame sizes, which means it may operate with various frame sizes.
Hierarchical Round Robin (HRR)
An HRR server has several service levels where each level provides round-robin service to a fixed number of slots. Some number of slots at a selected level are allocated to a channel, and the server cycles through the slots at each level. The time a server takes to service all the slots at a level is called the frame time at that level. The key of HRR is that it gives each level a constant share of the
bandwidth. Higher levels get more bandwidth than lower levels, so the frame time at a higher level is smaller than the frame time at a lower level. Since a server always completes one round through its slots once every frame time, it can provide a maximum delay bound to the channels allocated to that level.
Check Your Progress 1
Enumerate a few rate-based scheduling disciplines:
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
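The Delay EDD deadline rule described above can be sketched in a few lines. This is a simplified model under stated assumptions (it ignores peak-rate policing and models only one node); the function name and parameters are invented for illustration:

```python
def delay_edd_deadline(arrival_time: float, channel_start: float,
                       packet_index: int, avg_interarrival: float,
                       delay_bound: float) -> float:
    """Delay EDD sets a packet's deadline to its expected arrival time
    under the contract, plus the per-node delay bound. A packet that
    arrives early is scheduled as if it had arrived on time, so a bursty
    source cannot steal service from well-behaved channels."""
    expected_arrival = channel_start + packet_index * avg_interarrival
    # a late packet cannot be scheduled before it actually arrives,
    # so take the later of expected and actual arrival
    return max(expected_arrival, arrival_time) + delay_bound

# a channel contracted to send one packet every 10 ms, delay bound 5 ms
d0 = delay_edd_deadline(0.000, 0.0, 0, 0.010, 0.005)  # on-time packet
d1 = delay_edd_deadline(0.002, 0.0, 1, 0.010, 0.005)  # early packet
```

The early packet (index 1, arriving at 2 ms) still gets the deadline of its contracted 10 ms arrival slot plus the bound, not an earlier one.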
control reservation protocol, which accompanies the IP protocol and provides some kind of connection along the path where resources are allocated. QoS description, distribution and provision, and the connected resource admission, reservation, allocation and provision, must be embedded in the different components of the multimedia communication architecture. This means that proper services and protocols in the end-points and the underlying network architectures must be provided. Especially, the system domain needs to have QoS and resource management. Several important issues, described in detail in previous sections, must be considered in the end-point architectures: (1) QoS specification, negotiation and provision; (2) resource admission and reservation for end-to-end QoS; and (3) QoS-configurable transport systems. Some examples of architectural choices where QoS and resource management are designed and implemented include the following:
1. The OSI architecture provides QoS in the network layer and some enhancements in the transport layer. The OSI 95 project considers integrated QoS specification and negotiation in the transport protocols.
2. Lancaster's QoS-Architecture (QoS-A) offers a framework to specify and implement the required performance properties of multimedia applications over
high-performance ATM-based networks. QoS-A incorporates the notions of flow, service contract and flow management. The Multimedia Enhanced Transport Service (METS) provides the functionality to contract QoS.
3. The Heidelberg Transport System (HeiTS), based on the ST-II network protocol, provides continuous-media exchange with QoS guarantees, an upcall structure, resource management and real-time mechanisms. HeiTS transfers continuous-media data streams from one origin to one or multiple targets via multicast. HeiTS nodes negotiate QoS values by exchanging flow specifications to determine the resources required: delay, jitter, throughput and reliability.
4. UC Berkeley's Tenet Protocol Suite, with the protocol set RCAP, RTIP, RMTP and CMTP, provides network QoS negotiation, reservation and resource administration through the RCAP control and management protocol.
5. The Internet protocol stack, based on the IP protocol, provides resource reservation if the RSVP control protocol is used.
6. QoS handling and management is provided in UPenn's end-point architecture (OMEGA Architecture) at the application and transport subsystems, where the QoS Broker, as the end-to-end control and management protocol, implements QoS handling over both subsystems and relies on control and management in ATM networks.
7. The Native-Mode ATM Protocol Stack, developed in the IDLInet (IIT Delhi Low-cost Integrated Network) testbed at the Indian Institute of Technology, provides network QoS guarantees.
Parameterization of the services is defined in ISO (International Organization for Standardization) standards through the notion of Quality of Service (QoS). The MCS consists of three layers: application, system (including communication services and operating system services), and devices (network and Multimedia (MM) devices). The QoS parameters include throughput, delay, response time, rate, data corruption at the system level, and task and buffer specification.
18.9 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt
2. Multimedia: Concepts and Practice by Stephen McGloughlin
Lesson 19 Synchronisation
Contents
19.0 Aims and Objectives
19.1 Introduction
19.2 Notion of Synchronisation
19.3 Basic Synchronisation Issues
19.4 Intra- and Inter-Object Synchronisation
19.5 Lip Synchronisation Requirements
19.6 Pointer Synchronisation Requirements
19.7 Reference Model for Multimedia Synchronisation
19.8 Synchronisation Specification
19.9 Let Us Sum Up
19.10 Lesson-End Activities
19.11 Model Answers to Check Your Progress
19.12 References
lesson the learner will be able to:
i) know the meaning of synchronisation
ii) understand synchronisation in audio
iii) implement a reference model for synchronisation
19.1 Introduction
Advanced multimedia systems are characterized by the integrated computer-controlled generation, storage, communication, manipulation and presentation of independent time-dependent and time-independent media. The key issue which provides integration is the digital representation of any data and the synchronization of and between various kinds of media and data. The word synchronization refers to time. Synchronization in multimedia systems refers to the temporal relations between media objects in the multimedia system. In a more general and widely used sense, some authors use synchronization in multimedia systems as comprising content, spatial and temporal relations between media objects. We differentiate between time-dependent and time-independent media objects. If the presentation durations of all information units of a time-dependent media object are equal, it is called a continuous media object. A video consists of a number of ordered frames; each of these frames has a fixed presentation duration. A time-independent media object is any kind of traditional medium like text or images; the semantics of its content do not depend on a presentation according to the time domain.
Synchronization between media objects comprises relations between time-dependent media objects and time-independent media objects. A daily example of synchronization between continuous media is the synchronization between the visual and acoustical information in television. In a multimedia system, similar synchronization must be provided for audio and moving pictures. Synchronization is addressed and supported by many system components, including the operating system, communication system, databases, documents and often even applications. Hence, synchronization must be considered at several levels in a multimedia system.
Integrated digital systems can support all types of media and, due to digital processing, may provide a high degree of media integration. Systems that handle time-dependent analog media objects and time-independent digital media objects are called
hybrid systems. The disadvantage of hybrid systems is that they are restricted with regard to the integration of time-dependent and time-independent media because, for example, audio and video are stored on different devices than time-independent media objects, and multimedia workstations must comprise both types of devices.
Figure: Intra-object synchronization between frames of a video sequence showing a jumping ball.
Figure: Inter-object synchronization example showing the temporal relations in a multimedia presentation including audio, video, animation and picture objects.
The following example shows aspects of live synchronization: two persons located at different sites of a company discuss a new product. For this, they use a video conference application for person-to-person discussion. In addition, they share a blackboard where they can display parts of the product, point with their mouse pointers to details of these parts, and discuss issues like "This part is designed to ..." In the case of synthetic synchronization, temporal relations have been assigned to media
objects that were created independently of each other. Synthetic synchronization is often used in presentation and retrieval-based systems with stored data objects that are arranged to provide new combined multimedia objects. A media object may be part of several multimedia objects.
next higher layer to implement an interface. Higher layers offer higher programming and Quality of Service (QoS) abstractions. For each layer, typical objects and the operations on these objects are described in the following. The semantics of the objects and operations are the main criteria for assigning them to one of the layers.
Media Layer: At the media layer, an application operates on a single continuous media stream, which is treated as a sequence of LDUs.
Figure: The four-layer reference model, from high to low abstraction: specification layer, object layer, stream layer and media layer (below the multimedia application).
Stream Layer: The stream layer operates on continuous media streams as well as on groups of media streams. In a group, all streams are presented in parallel by using mechanisms for inter-stream synchronization. The abstraction offered by the stream layer is the notion of streams with timing parameters concerning the QoS for intra-stream synchronization within a stream and inter-stream synchronization between streams of a group. Continuous media is seen in the stream layer as a data flow with implicit time constraints; individual LDUs are not visible. The streams are executed in a Real-Time Environment (RTE), where all processing is constrained by well-defined time specifications.
Object Layer: The object layer operates on all types of media and hides the differences between discrete and continuous media. The abstraction offered to the application is that of a complete, synchronized presentation. This layer takes a synchronization specification as input and is responsible for the correct schedule of the overall presentation. The task of this layer is to close the gap between the needs of the execution of a synchronized presentation and the stream-oriented services. The functions located at the object layer compute and execute complete presentation schedules that include the presentation of the non-continuous media objects and the calls to the stream layer.
Specification Layer: The specification layer is an open layer; it does not offer an explicit interface. This layer contains applications and tools that are allowed to create synchronization specifications. Such tools are synchronization editors, multimedia document editors and authoring systems. Also located at the specification layer are tools for converting specifications to an object layer format. The specification layer is also responsible for mapping QoS requirements of the user level to the qualities offered at the object layer interface.
Synchronization specification methods can be classified into the following main categories:
Interval-based specifications, which allow the specification of temporal relations between the time intervals of the presentations of media objects.
Axes-based specifications, which relate presentation events to axes that are shared by the objects of the presentation.
Control flow-based specifications, in which the flow of the presentations is synchronized at given synchronization points.
Event-based specifications, in which events in the presentation of media trigger presentation actions.
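As a small illustration of an axes-based specification, the sketch below places media objects on a shared time axis and checks whether two objects that should play in parallel stay within a skew bound. All names and the 80 ms bound (a value commonly cited for lip synchronization) are assumptions for this example:

```python
from dataclasses import dataclass

@dataclass
class MediaObject:
    """A media object placed on a shared (global) time axis."""
    name: str
    start: float      # start time on the shared axis, in seconds
    duration: float   # presentation duration, in seconds

def skew_ok(a: MediaObject, b: MediaObject, max_skew: float) -> bool:
    """True if the start-time skew between two objects that should be
    presented in parallel stays within max_skew seconds."""
    return abs(a.start - b.start) <= max_skew

audio = MediaObject("audio1", 0.000, 10.0)
video = MediaObject("video1", 0.050, 10.0)
in_sync = skew_ok(audio, video, 0.080)   # 50 ms skew vs. an 80 ms bound
```

A real axes-based specification would of course relate many presentation events to the shared axis, not just object start times.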
Check Your Progress 1
Write down the four layers present in the synchronisation reference model.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
19.12 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt
2. Multimedia Systems, Standards, and Networks by Atul Puri, Tsuhan Chen
3. Multimedia Communications: Directions and Innovations by Jerry D. Gibson
20.1 Introduction
A multimedia networking system allows for the exchange of discrete and continuous media data among computers. This communication requires proper services and protocols for data transmission. Multimedia networking enables the distribution of media to different workstations.
service specification contains no information concerning any aspects of the implementation. A protocol consists of a set of rules which must be followed by peer layer instances during any communication between these two peers. It comprises the formal
(syntax) and the meaning (semantics) of the exchanged data units (Protocol Data Units, PDUs). The peer instances of different computers cooperate to provide a service. Multimedia communication puts several requirements on services and protocols which are independent of the layer in the network architecture. In general, this set of requirements depends to a large extent on the respective application. However, without defining precise values for individual parameters, the following requirements must be taken into account:
Audio and video data processing needs to be bounded by deadlines or even defined by a time interval. The data transmission, both between the applications and the transport layer interfaces of the involved components, must stay within the demands concerning the time domain.
End-to-end jitter must be bounded. This is especially important for interactive applications such as the telephone. Large jitter values would mean large buffers and higher end-to-end delays.
All guarantees necessary for achieving the data transfer within the required time span must be met. This includes the required processor performance, as well as the data transfer over a bus and the available storage for protocol processing.
Cooperative work scenarios using multimedia conference systems are the main application areas of multimedia communication systems. These systems should support multicast connections to save resources. The sender instance may often change during a single session. Further, a user should be able to join or leave a multicast group without having to request a new connection setup, which would need to be handled by all other members of this group.
The services should provide mechanisms for synchronizing different data streams, or alternatively perform the synchronization using available primitives implemented in another system component.
The multimedia communication must be compatible with the most widely used communication protocols and must make use of existing, as well as future, networks. Communication compatibility means that different protocols at least coexist and run on the same machine simultaneously. The relevance of the envisaged protocols can only be achieved if the same protocols are widely used. Many of the current multimedia communication systems are, unfortunately, proprietary experimental systems.
The communication of discrete data should not starve because of preferred or guaranteed video/audio transmission; discrete data must be transmitted without undue penalty. The fairness principle among different applications, users and workstations must be enforced.
The actual audio/video data rate varies strongly, which leads to fluctuations of the data rate that need to be handled by the services.
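The remark that large jitter values mean large buffers can be made concrete: a receiver that delays playout by the jitter bound needs buffer space for the data that can arrive during that interval. The following rough sketch assumes this simple sizing rule; the function name and example values are invented for illustration:

```python
def playout_buffer_bytes(jitter_bound_s: float, throughput_bps: float) -> int:
    """Buffer needed to absorb delay jitter: the amount of data that
    can arrive during one jitter interval at the stream's throughput."""
    return int(jitter_bound_s * throughput_bps / 8)

# a 1.4 Mbit/s audio/video stream with a 100 ms jitter bound
buf = playout_buffer_bytes(0.100, 1_400_000)
```

Halving the jitter bound halves the buffer (and the added playout delay), which is why bounded jitter matters so much for interactive applications.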
20.2.1 Physical Layer
The physical layer defines the transmission method of individual bits over the physical medium, such as fiber optics. For example, the type of modulation and bit synchronization are important issues. With respect to the particular modulation, delays during the data transmission arise due to the propagation speed of the transmission medium and the electrical circuits used. They determine the maximal possible bandwidth of this communication channel. For audio/video data in general, the delays must be minimized and a relatively high bandwidth should be achieved.
20.2.2 Data Link Layer
The data link layer provides the transmission of information blocks known as data frames. Further, this layer is responsible for access protocols to the physical medium, error recognition and correction, flow control and block synchronization. Access protocols are very much dependent on the network. Networks can be divided into two categories: those using point-to-point connections and those using broadcast channels, sometimes called multi-access channels or random access channels. In a broadcast network, the key issue is how to determine, in the case of competition, who gets access to the channel. To solve this problem, the Medium Access Control (MAC) sublayer was introduced and MAC protocols, such as the Timed Token Rotation Protocol and Carrier Sense Multiple Access with Collision Detection (CSMA/CD), were developed. Continuous data streams require reservation and throughput guarantees over a line. To avoid larger delays, the error control for multimedia transmission needs a mechanism other than retransmission, because a late frame is a lost frame.
20.2.3 Network Layer
The network layer transports information blocks, called packets, from one station to another. The transport may involve several networks.
Therefore, this layer provides services such as addressing, internetworking, error handling, network management with congestion control and sequencing of packets. Again, continuous media require resource reservation and guarantees for transmission at this layer. A request for reservation for later resource guarantees is defined through Quality of Service (QoS) parameters, which correspond to the requirements for continuous data stream transmission. The reservation must be done along the path between the communicating stations.
20.2.4 Transport Layer
The transport layer provides a process-to-process connection. At this layer, the QoS which is provided by the network layer is enhanced, meaning that if the network service is poor, the transport layer has to bridge the gap between what the transport users want and what the network layer provides. Large packets are segmented at this layer and reassembled into their original size at the receiver. Error handling is based on process-to-process communication.
20.2.5 Session Layer
In the case of continuous media, multimedia sessions, which reside over one or
more transport connections, must be established. This introduces a more complex view of connection reconstruction in the case of transport problems.
20.2.6 Presentation Layer
The presentation layer abstracts from different formats (the local syntax) and provides common formats (the transfer syntax). Therefore, this layer must provide services for transformation between the application-specific formats and the agreed-upon format. An example is the different representation of a number on Intel or Motorola processors. The multitude of audio and video formats also requires conversion between formats. This problem also comes up outside of the communication components during exchange between data carriers, such as CD-ROMs, which store continuous data. Thus, format conversion is often discussed in other contexts.
20.2.7 Application Layer
The application layer considers all application-specific services, such as the file transfer service embedded in the File Transfer Protocol (FTP) and the electronic mail service. With respect to audio and video, special services for the support of real-time access and transmission must be provided.
Check Your Progress 1
List the different layers used in networking.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
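The byte-order problem mentioned for the presentation layer can be demonstrated with Python's standard struct module: the same 32-bit integer is laid out differently on big-endian (Motorola-style, also network byte order) and little-endian (Intel-style) machines, so a common transfer syntax must fix one ordering:

```python
import struct

value = 0x12345678
big = struct.pack(">I", value)     # big-endian transfer syntax (network order)
little = struct.pack("<I", value)  # little-endian local syntax (Intel-style)

# the byte sequences differ, so a presentation-layer conversion is required
assert big == b"\x12\x34\x56\x78"
assert little == b"\x78\x56\x34\x12"

# the receiver recovers the value by unpacking with the agreed-upon ordering
recovered, = struct.unpack(">I", big)
```

This is exactly the kind of transformation between local syntax and transfer syntax the presentation layer is responsible for.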
network uses the CSMA/CD protocol for resolution of multiple access to the broadcast channel in the MAC sublayer: before data transmission begins, the network state is checked by the sender station. Each station may try to send its data only if, at that moment, no other station is transmitting data. Therefore, each station must be able to listen and send simultaneously.
Dedicated Ethernet
Another possibility for the transmission of audio/video data is to dedicate a separate Ethernet LAN to the transmission of continuous data. This solution requires compliance with a proper additional protocol. Further, end-users need at least two separate networks for their communications: one for continuous data and another for discrete data. This approach makes sense for experimental systems, but means additional expense for end-systems and cabling.
Hub
A very pragmatic solution can be achieved by exploiting an installed network configuration. Most Ethernet cables are not installed in the form of a bus system; they make up a star (i.e., cables radiate from a central room to each station). In this central room, each cable is attached to its own Ethernet interface.
Instead of configuring a bus, each station is connected via its own Ethernet to a hub. Hence, each station has the full Ethernet bandwidth available, and a new network for multimedia transmission is not necessary.
Fast Ethernet
Fast Ethernet, known as 100Base-T, offers throughput of up to 100 Mbit/s, and it permits users to move gradually into the world of high-speed LANs. The Fast Ethernet Alliance, an industry group with more than 60 member companies, began work on the 100 Mbit/s 100Base-TX specification in the early 1990s. The alliance submitted the proposed standard to the IEEE and it was approved. During the standardization process, the alliance and the IEEE also defined a Media-Independent Interface (MII) for Fast Ethernet, which enables it to support various cabling types on the same Ethernet network. Therefore, Fast Ethernet offers three media options: 100Base-T4 for half-duplex operation on four pairs of UTP (Unshielded Twisted Pair cable), 100Base-TX for half- or full-duplex operation on two pairs of UTP or STP (Shielded Twisted Pair cable), and 100Base-FX for half- and full-duplex transmission over fiber optic cable.
Token Ring
The Token Ring is a LAN with 4 or 16 Mbit/s throughput. All stations are connected to a logical ring. In a Token Ring, a special bit pattern (3 bytes), called a token, circulates around the ring whenever all stations are idle. When a station wants to transmit a frame, it must seize the token and remove it from the ring before transmitting. Ring interfaces have two operating modes: listen and transmit. In the listen mode, input bits are simply copied to the output. In the transmit mode, which is entered only after the token has been seized, the interface breaks the connection between the input and the output, entering its own data onto the ring. As the bits that were inserted and subsequently propagated around the ring come back, they are removed from the ring by the sender.
After a station has finished transmitting the last bit of its last frame, it must regenerate the token. When the last bit of the frame has gone around and returned, it must be removed,
and the interface must immediately switch back into the listen mode to avoid a duplicate transmission of the data. Each station receives, reads and sends frames circulating in the ring according to the Token Ring MAC sublayer protocol (IEEE standard 802.5). Each frame includes a Sender Address (SA) and a Destination Address (DA). When the sending station drains the frame from the ring, the Frame Status field is updated, i.e., the A and C bits of the field are examined. Three combinations are allowed:
A=0, C=0 : destination not present or not powered up.
A=1, C=0 : destination present but frame not accepted.
A=1, C=1 : destination present and frame copied.
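The three allowed A/C combinations can be decoded as in this small sketch (the function name is invented for illustration):

```python
def decode_frame_status(a_bit: int, c_bit: int) -> str:
    """Interpret the A (address-recognized) and C (frame-copied) bits of
    a Token Ring frame status field, as read back by the sending station
    when it drains its own frame from the ring."""
    if a_bit == 0 and c_bit == 0:
        return "destination not present or not powered up"
    if a_bit == 1 and c_bit == 0:
        return "destination present but frame not accepted"
    if a_bit == 1 and c_bit == 1:
        return "destination present and frame copied"
    raise ValueError("A=0, C=1 is not an allowed combination")

status = decode_frame_status(1, 1)
```

The sender thus gets a free, immediate acknowledgment for every frame simply by examining the bits its own frame carries back.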
20.4 FDDI
The Fiber Distributed Data Interface (FDDI) is a high-performance fiber optic LAN which is configured as a ring. It is often seen as the successor of the Token Ring IEEE 802.5 protocol. The standardization began in the American National Standards Institute (ANSI), in the group X3T9.5, in 1982. Early implementations appeared in 1988. Compared to the Token Ring, FDDI is more a backbone than only a LAN, because it runs at 100 Mbit/s over distances up to 100 km with up to 500 stations, whereas the Token Ring typically supports between 50 and 250 stations. The distance between neighboring stations is less than 2 km in FDDI. The FDDI design specification calls for no more than one error in 2.5*10^10 bits; many implementations do much better. The FDDI cabling consists of two fiber rings, one transmitting clockwise and the other transmitting counter-clockwise. If either one breaks, the other can be used as a backup. FDDI supports different transmission modes which are important for the communication of multimedia data: the synchronous mode allows a bandwidth reservation, while the asynchronous mode behaves similarly to the Token Ring protocol. Many current implementations support only the asynchronous mode. Before diving into a discussion of the different modes, we will briefly describe the topologies and FDDI system components.
Figure: An overview of data transmission in FDDI (synchronous versus asynchronous/non-isochronous transmission; asynchronous traffic with priorities 0 to 7 and restricted or non-restricted tokens).
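The design error rate can be put in perspective with a back-of-envelope calculation. Assuming the commonly quoted FDDI target of at most one error per 2.5*10^10 bits, the worst-case mean time between bit errors at the full 100 Mbit/s rate is:

```python
def mean_time_between_errors_s(bits_per_error: float, rate_bps: float) -> float:
    """Worst-case mean time between bit errors for a link that allows at
    most one error per `bits_per_error` bits at a data rate of `rate_bps`."""
    return bits_per_error / rate_bps

# assumed FDDI design target: one error per 2.5e10 bits at 100 Mbit/s
t = mean_time_between_errors_s(2.5e10, 100e6)   # about 250 seconds
```

So even at the design limit, a fully loaded ring sees at most roughly one bit error every few minutes, and real implementations do considerably better.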
20.4.1 Topology of FDDI
The main topology features of FDDI are the two fiber rings, which operate in opposite directions (dual ring topology). The primary ring provides the data transmission; the secondary ring improves the fault tolerance. Individual stations can be, but do not have to be, connected to both rings. FDDI defines two classes of stations, A
and B: a class A station (Dual Attachment Station) connects to both rings. It is connected either directly to the primary and secondary rings, or via a concentrator to the primary and secondary rings. A class B station (Single Attachment Station) connects to only one of the rings; it is connected via a concentrator to the primary ring.
20.4.2 FDDI Architecture
FDDI includes the following components, which are shown in the following Figure:
PHYsical Layer Protocol (PHY): defined in the standard ISO 9314-1, Information Processing Systems: Fiber Distributed Data Interface - Part 1: Token Ring Physical Layer Protocol.
Physical Layer Medium-Dependent (PMD): defined in the standard ISO 9314-3, Information Processing Systems: Fiber Distributed Data Interface - Part 3: Token Ring Physical Layer, Medium Dependent.
Station Management (SMT): defines the management functions of the ring according to the ANSI Preliminary Draft Proposal American National Standard X3T9.5/84-49 Revision 6.2, FDDI Station Management.
Media Access Control (MAC): defines the network access according to ISO 9314-2, Information Processing Systems: Fiber Distributed Data Interface - Part 2: Token Ring Media Access Control.
Check Your Progress 2
Explain the different components of FDDI.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
20.4.3 Further Properties of FDDI
Multicasting: The multicasting service has become one of the most important aspects of networking. FDDI supports group addressing, which enables multicasting.
Synchronisation: Synchronisation among different data streams is not part of the network; therefore, it must be solved separately.
Figure: The FDDI components: Station Management alongside MAC (packet interpretation, token passing, packet framing), PHY (encode/decode, clocking) and PMD, which converts between electrical and optical signals (electrons/photons) at the physical layer.
Packet Size: The size of the packets can directly influence the data delay in applications.
Implementations: Many FDDI implementations do not support the synchronous mode, which is very useful for the transmission of continuous media. Additionally, in asynchronous mode the same methods can be used as described for the Token Ring.
Restricted Tokens: If only two stations interact by transmitting continuous media data, then one can also use the asynchronous mode with restricted tokens.
Several new protocols at the network/transport layers in the Internet, and at higher layers in B-ISDN, are currently centers of research to support more efficient transmission of multimedia and multiple types of service.
1. The protocol layers are:
Physical Layer
Data Link Layer
Network Layer
Transport Layer
Session Layer
Presentation Layer
Application Layer
2. The FDDI architecture includes the following:
PHYsical Layer Protocol (PHY)
Physical Layer Medium-Dependent (PMD)
Station Management (SMT)
Media Access Control (MAC)
20.8 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt.
2. Multimedia Networking: Technology, Management and Applications by Syed Mahbubur Rahman.
3. Wireless Multimedia Network Technologies by Rajamani Ganesh, Kaveh Pahlavan and Zoran Zvonar.
4. Wireless Communication Technologies: New Multimedia Systems by Norihiko Morinaga, Seiichi Sampei and Ryuji Kohno.
5. Multimedia Systems, Standards, and Networks by Atul Puri and Tsuhan Chen.
UNIT - V
In this lesson we will learn the concepts behind multimedia operating systems. Various issues related to the handling of resources are discussed in this lesson.
21.1 Introduction
The operating system is the shield of the computer hardware against all software components. It provides a comfortable environment for the execution of programs, and it ensures effective utilization of the computer hardware. The operating system offers various services, related to the essential resources of a computer: CPU, main memory, storage and all input and output devices.
In contrast to traditional real-time operating systems, multimedia operating systems also have to consider tasks without hard timing restrictions under the aspect of fairness. The communication and synchronization between single processes must meet the restrictions of real-time requirements and the timing relations among different media. The main memory is available as a shared resource to single processes. In multimedia systems, memory management has to provide access to data with a guaranteed timing delay and efficient data manipulation functions. For instance, physical data copy operations must be avoided due to their negative impact on performance; buffer management operations (such as are known from communication systems) should be used instead. Database management is an important component in multimedia systems. However, database management abstracts from the details of storing data on secondary storage media; therefore, it should rely on the file management services provided by the multimedia operating system to access single files and file systems. Since the operating system shields devices from application programs, it must provide services for device management too. In multimedia systems, the important issue is the integration of audio and video devices in a similar way to any other input/output device: the addressing of a camera can be performed similarly to the addressing of a keyboard in the same system, although most current systems do not apply this technique.
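The buffer-management idea above can be sketched in a few lines. This is an illustrative sketch only (the names are invented, not an actual operating-system API): instead of physically copying media data between processing stages, only references to a shared buffer are passed around.

```python
# Sketch: passing zero-copy references instead of duplicating media data.
# All names here are illustrative, not a real multimedia OS interface.

def make_frame(size):
    """Allocate one media buffer (e.g., a video frame)."""
    return bytearray(size)

def hand_over(frame):
    """Pass a read-only view of the buffer to the next processing stage.
    No physical copy is made: the view shares the frame's memory."""
    return memoryview(frame)

frame = make_frame(1024)
view = hand_over(frame)
frame[0] = 42       # change the underlying buffer ...
print(view[0])      # ... the view sees it without any copy: 42
```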
A real-time system receives information from the environment, occurring spontaneously or in periodic time intervals, and/or delivers it to the environment under certain time constraints.
21.3.1 Characteristics of Real-Time Systems
The necessity of deterministic and predictable behaviour of real-time systems requires processing guarantees for time-critical tasks. Such guarantees cannot be assured for events that occur at random intervals with unknown arrival times, processing requirements or deadlines. A real-time system should provide:
Predictably fast response to time-critical events and accurate timing information.
A high degree of schedulability. Schedulability refers to the degree of resource utilization at which, or below which, the deadline of each time-critical task can still be met.
Stability under transient overload. Under system overload, the processing of critical tasks must be ensured.
21.3.2 Real-Time and Multimedia
Audio and video data streams consist of single, periodically changing values of continuous media data, e.g., audio samples or video frames. Each Logical Data Unit (LDU) must be presented by a well-determined deadline; jitter is only allowed before the final presentation to the user. A piece of music, for example, must be played back at a constant speed. To fulfill the timing requirements of continuous media, the operating system must use real-time scheduling techniques. These techniques must be applied to all system resources involved in the continuous media data processing, i.e., the entire end-to-end data path is involved. Traditional real-time scheduling techniques (used for command and control systems in application areas such as factory automation or aircraft piloting) have a high demand for security and fault-tolerance. The fault-tolerance requirements of multimedia systems are usually less strict than those of real-time systems that have a direct physical impact: the short-time failure of a continuous media system will not directly lead to the destruction of technical equipment or constitute a threat to human life. Please note that this is a general statement which does not always apply; for example, the support of remote surgery by video and audio has stringent delay and correctness requirements. For many multimedia applications, missing a deadline is not a severe failure, although it should be avoided. It may even go unnoticed, e.g., if an uncompressed video frame (or parts of it) is not available on time it can simply be omitted.
The viewer will hardly notice this omission, assuming it does not happen for a contiguous sequence of frames. A sequence of digital continuous media data is the result of periodically sampling a sound or image signal; hence, in processing the data units of such a sequence, all time-critical operations are periodic. The bandwidth demand of continuous media is not always that stringent: it need not be fixed a priori, but may eventually be lowered. As some compression algorithms are capable of using different compression ratios, leading to different qualities, the required bandwidth can be negotiated. If not enough bandwidth is
available for full quality, the application may also accept reduced quality (instead of no service at all).
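The tolerance for missed deadlines described above can be illustrated with a small sketch (all names and timing values are invented): a playback loop that simply omits frames that arrive too late instead of treating a late frame as a fatal error.

```python
# Sketch: skip frames that miss their presentation deadline rather than fail.
# Deadlines are in milliseconds; the frame list and times are invented.

def frames_to_present(frames, now):
    """Return the frames that are still on time; late ones are omitted."""
    shown = []
    for deadline, frame in frames:
        if deadline >= now:
            shown.append(frame)   # presented on time
        # else: silently dropped; a single omission is hardly noticed
    return shown

frames = [(40, "f1"), (80, "f2"), (120, "f3")]
print(frames_to_present(frames, now=90))   # ['f3'] - f1 and f2 missed their deadlines
print(frames_to_present(frames, now=0))    # ['f1', 'f2', 'f3'] - all on time
```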
1. The throughput is determined by the data rate of a connection needed to satisfy the application requirements. It also depends on the size of the data units.
2. It is possible to distinguish between local and global (end-to-end) delay:
a. The delay at a resource is the maximum time span for the completion of a certain task at this resource.
b. The end-to-end delay is the total delay for a data unit to be transmitted from the source to its destination. For example, the source of a video telephone call is the camera; the destination is the video window on the screen of the partner.
3. The jitter determines the maximum allowed variance in the arrival of data at the destination.
4. The reliability defines the error detection and correction mechanisms used for the transmission and processing of multimedia tasks. Errors can be ignored, indicated and/or corrected. It is important to notice that error correction through retransmission is rarely appropriate for time-critical data, because the retransmitted data would usually arrive late; forward error correction mechanisms are more useful.
In accordance with communication systems, these requirements are also known as Quality of Service (QoS) parameters.
21.4.3 Components of the Resources
One possible realization of resource allocation and management is based on the interaction between clients and their respective resource managers. The client selects the resource and requests a resource allocation by specifying its requirements through a QoS specification; this is equivalent to a workload request. First, the resource manager checks its own resource utilization and decides whether the reservation request can be served or not. All existing reservations are stored; this way, their share of the respective resource capacity is guaranteed. Moreover, this component negotiates the reservation request with other resource managers, if necessary. In the following figure, two computers are connected over a LAN. The transmission of video data between a camera connected to the server computer and the screen of the user's computer involves a resource manager for all depicted components.
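As an illustration, the four QoS parameters above can be collected into one simple record. The field names and units are assumptions for this sketch, not a standardized QoS format:

```python
# Sketch: a QoS specification as a plain record. Field names and units are
# illustrative assumptions, not taken from any standard.
from dataclasses import dataclass

@dataclass
class QoS:
    throughput: float   # required data rate, e.g., in Mbit/s
    delay: float        # maximum end-to-end delay in ms
    jitter: float       # maximum allowed variance of arrival times in ms
    reliability: str    # errors "ignored", "indicated" or "corrected"

# A client would hand such a record to the resource manager as its workload request:
request = QoS(throughput=1.4, delay=150.0, jitter=20.0, reliability="corrected")
print(request.delay)   # 150.0
```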
21.4.4 Phases of the Resource Reservation and Management Process
A resource manager provides components for the different phases of the allocation and management process:
1. Schedulability Test: The resource manager checks, given the QoS parameters (e.g., throughput and reliability), whether there is enough remaining resource capacity available to handle the additional request.
2. Quality of Service Calculation: After the schedulability test, the resource manager calculates the best possible performance (e.g., delay) the resource can guarantee for the new request.
3. Resource Reservation: The resource manager allocates the required capacity to meet the QoS guarantees for each request.
Figure: Components grouped for the purpose of video data transmission - at the server station: frame grabber & encoder, compression, and communication (transport & network layer, data link & network adapter); across the network to the user station: communication (transport & network layer, data link & network adapter), decompression, and presentation in a window.
4. Resource Scheduling: Incoming messages from connections are scheduled according to the given QoS guarantees. For process management, for instance, the allocation of the resource is done by the scheduler at the moment the data arrive for processing.
Check Your Progress 1
List the phases available in the resource reservation.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
21.4.5 Resource Allocation Scheme
Reservation of resources can be made either in a pessimistic or an optimistic way:
The pessimistic approach avoids resource conflicts by making reservations for the worst case, i.e., resource bandwidth for the longest processing time and the highest rate which might ever be needed by a task is reserved. Resource conflicts are therefore avoided, but this potentially leads to an underutilization of resources.
With the optimistic approach, resources are reserved according to an average workload only; the CPU, for example, is reserved only for the average processing time. This approach may overbook resources, with the possibility of unpredictable packet delays. QoS parameters are met as far as possible, and resources are highly utilized, though an overload situation may result in failure. To detect an overload situation and to handle it accordingly, a monitor can be implemented; the monitor may, for instance, preempt processes according to their importance.
The optimistic approach is considered to be an extension of the pessimistic approach. It requires that additional mechanisms to detect and solve resource conflicts be implemented.
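A minimal sketch of the two reservation schemes, assuming a single resource with a fixed capacity and invented workload numbers (this is not a real resource-manager API):

```python
# Sketch of a resource manager's admission decision. The pessimistic scheme
# reserves for the worst case (peak load), the optimistic scheme only for the
# average load. Capacity and workloads are invented numbers.

class ResourceManager:
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0.0          # sum of all granted reservations

    def request(self, average_load, peak_load, pessimistic=True):
        """Grant the reservation only if the remaining capacity suffices."""
        demand = peak_load if pessimistic else average_load
        if self.reserved + demand <= self.capacity:
            self.reserved += demand  # stored, so the share stays guaranteed
            return True
        return False                 # admission refused

pess = ResourceManager(capacity=100.0)
print(pess.request(average_load=30, peak_load=60))   # True
print(pess.request(average_load=30, peak_load=60))   # False: worst case would over-book

opt = ResourceManager(capacity=100.0)
print(opt.request(30, 60, pessimistic=False))        # True
print(opt.request(30, 60, pessimistic=False))        # True: fits on average, but overload is possible
```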
Check Your Progress 2
Distinguish the pessimistic and optimistic resource allocation schemes and give their areas of use.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
21.8 References
1. Multimedia Systems, Standards, and Networks by Atul Puri and Tsuhan Chen.
2. Multimedia Storage and Retrieval: An Algorithmic Approach by Jan Korst and Verus Pronk.
3. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt.
4. Multimedia: Making it Work by Tay Vaughan.
22.1 Introduction
One of the main activities of an operating system is managing multimedia processes and the processor. Effective management of the processor is necessary for smooth multimedia production and playback.
If all necessary resources are assigned to the process, it is ready to run. The process only needs the processor for the execution of the program.
A process is running as long as the system processor is assigned to it. The scheduler, a component of the process manager, transfers a process into the ready-to-run state by assigning it a position in the respective queue of the dispatcher, which is the essential part of the operating system kernel; the dispatcher manages the transition from ready-to-run to run. In most operating systems, the next process to run is chosen according to a priority policy; between processes with the same priority, the one with the longest ready time is chosen.
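The dispatching rule just described (highest priority first, longest ready time as tie-breaker) can be sketched as follows; the process names and numbers are invented:

```python
# Sketch: pick the next process by priority; among equal priorities the one
# that has been ready longest (earliest ready_since) wins.

def next_to_run(ready_queue):
    """ready_queue: list of (priority, ready_since, name).
    Higher priority wins; earlier ready_since (longer wait) breaks ties."""
    return max(ready_queue, key=lambda p: (p[0], -p[1]))[2]

queue = [(5, 10, "editor"), (7, 30, "video"), (7, 20, "audio")]
print(next_to_run(queue))   # "audio": same priority as "video" but waiting longer
```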
There are several approaches to solving real-time scheduling problems; many of them are just variations of basic algorithms. To find the best solutions for multimedia systems, two basic algorithms are analyzed, the Earliest Deadline First algorithm and Rate Monotonic Scheduling, and their advantages and disadvantages are elaborated.
22.4.1 Earliest Deadline First Algorithm
The Earliest Deadline First (EDF) algorithm is one of the best-known algorithms for real-time processing. At every new ready state, the scheduler selects the task with the earliest deadline among the tasks that are ready and not fully processed. The requested resource is assigned to the selected task. At any arrival of a new task, the EDF order must be recomputed immediately, i.e., the running task is preempted and the new task is scheduled according to its deadline. The new task is processed immediately if its deadline is earlier than that of the interrupted task; the processing of the interrupted task is continued according to the EDF algorithm later on. EDF is not only an algorithm for periodic tasks, but also for tasks with arbitrary requests, deadlines and service execution times; in this case, however, no guarantee about the processing of any task can be given. EDF is an optimal, dynamic algorithm, i.e., it produces a valid schedule whenever one exists. A dynamic algorithm schedules every instance of each incoming task according to its specific demands; tasks of periodic processes must be scheduled anew in each period. With n tasks which have arbitrary ready times and deadlines, the complexity is O(n²). EDF is used by different models as a basic algorithm. One extension of EDF is the Time-Driven Scheduler (TDS). Tasks are scheduled according to their deadlines, and the TDS is able to handle overload situations: if an overload situation occurs, the scheduler aborts tasks which cannot meet their deadlines anymore; if the overload persists, it removes tasks with a low value density, where the value density corresponds to the importance of a task for the system. Another priority-driven EDF scheduling algorithm divides every task into a mandatory and an optional part. A task is terminated according to the deadline of its mandatory part, even if it is not completed by this time; tasks are scheduled with respect to the deadlines of the mandatory parts. A set of tasks is said to be schedulable if all tasks can meet the deadlines of their mandatory parts.
22.4.2 Rate Monotonic Algorithm
The rate monotonic scheduling principle was introduced by Liu and Layland in 1973. It is an optimal, static, priority-driven algorithm for preemptive, periodic jobs.
Optimal in this context means that there is no other static algorithm that is able to schedule a task set which cannot be scheduled by the rate monotonic algorithm. A process is scheduled by a static algorithm at the beginning of the processing. Subsequently, each task is processed with the priority calculated at the beginning. No further scheduling is required.
The following five assumptions are necessary prerequisites to apply the rate monotonic algorithm:
1. The requests for all tasks with deadlines are periodic, i.e., they have constant intervals between consecutive requests.
2. The processing of a single task must be finished before the next task of the same data stream becomes ready for execution. Deadlines consist of runability constraints only, i.e., each task must be completed before the next request occurs.
3. All tasks are independent: the requests for a certain task do not depend on the initiation or completion of requests for any other task.
4. The run-time for each request of a task is constant. Run-time denotes the maximum time which is required by a processor to execute the task without interruption.
5. Any non-periodic task in the system has no required deadline. Typically, such tasks initiate periodic tasks or handle failure recovery, and they usually displace periodic tasks.
Static priorities are assigned to tasks once, at the connection set-up phase, according to their request rates. The priority corresponds to the importance of a task relative to other tasks: tasks with higher request rates have higher priorities, so the task with the shortest period gets the highest priority and the task with the longest period gets the lowest priority. The rate monotonic algorithm is a simple method to schedule time-critical, periodic tasks on the respective resource. A task will always meet its deadline if this can be proven for its longest response time. The response time is the time span between the request and the end of processing the task; this time span is maximal when all processes with a higher priority request processing at the same time. This case is known as the critical instant, shown in the following figure, where the priority of a is, according to the rate monotonic algorithm, higher than that of b, and b's is higher than c's. The critical time zone is the time interval between the critical instant and the completion of a task.
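A toy illustration of the two algorithms, using an invented two-task set: EDF picks dynamically by deadline, while the rate monotonic algorithm assigns static priorities by period. The utilization test shown is the well-known Liu/Layland sufficient bound U <= n(2^(1/n) - 1):

```python
# Toy task set (invented): each task has a period, a run-time per period and a
# deadline. Only the ordering logic follows the algorithms described above.

def edf_pick(ready_tasks):
    """EDF: dynamically pick the ready task with the earliest deadline."""
    return min(ready_tasks, key=lambda t: t["deadline"])["name"]

def rm_priorities(tasks):
    """Rate monotonic: static priority order, shortest period first."""
    return [t["name"] for t in sorted(tasks, key=lambda t: t["period"])]

def rm_schedulable(tasks):
    """Liu/Layland sufficient condition: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    u = sum(t["runtime"] / t["period"] for t in tasks)
    return u <= n * (2 ** (1 / n) - 1)

tasks = [{"name": "audio", "period": 20, "runtime": 5,  "deadline": 20},
         {"name": "video", "period": 40, "runtime": 15, "deadline": 30}]
print(edf_pick(tasks))        # "audio": deadline 20 is the earliest
print(rm_priorities(tasks))   # ['audio', 'video']: shortest period first
print(rm_schedulable(tasks))  # True: U = 0.625 <= 2*(sqrt(2)-1) ~ 0.828
```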
22.4.3 Other Approaches to the Rate Monotonic Algorithm
There are several extensions of this algorithm. One of them divides a task into a mandatory and an optional part. The processing of the mandatory part delivers a result which can be accepted by the user; the optional part only refines the result. The mandatory part is scheduled according to the rate monotonic algorithm, while other, different policies are suggested for the scheduling of the optional part. In some systems there are aperiodic tasks next to periodic ones. To meet the deadline requirements of periodic tasks and the response time requirements of aperiodic requests, it must be possible to schedule both aperiodic and periodic tasks. If the aperiodic request is an aperiodic continuous stream (e.g., video images as part of a slide show), it is possible to transform it into a periodic stream: every timed data item can be substituted by n items with the duration of the minimal life span. The number of items is increased, but since the life span is decreased, the semantics remain unchanged; the stream is now periodic because every item has the same life span. If the stream is not continuous, a sporadic server can be applied to respond to aperiodic requests. The server is provided with a computation budget, which is refreshed t units of time after it has been exhausted (earlier refreshing is also possible). The budget represents the computation time reserved for aperiodic tasks.
Figure: Critical instants of the tasks and the resulting critical time zones.
Check Your Progress 2
Distinguish the Earliest Deadline First and rate monotonic algorithms and give their areas of use.
Notes: a) Write your answers in the space given below. b) Check your answers with the one given at the end of this lesson.
22.4.4 Other Approaches for In-Time Scheduling
Apart from the two methods previously discussed, further scheduling algorithms have been evaluated regarding their suitability for the processing of continuous media data.
Least Laxity First (LLF). The laxity is the time between the actual time t and the deadline, minus the remaining processing time. With start time s, period p, relative deadline d and remaining processing time e, the laxity in period k is:
laxity = (s + (k - 1)p + d) - (t + e)
LLF is an optimal, dynamic algorithm for exclusive resources; furthermore, it is an optimal algorithm for multiple resources if the ready times of the real-time tasks are the same. The laxity is a function of the deadline, the processing time and the current time. The processing time cannot be specified exactly in advance, so the worst case is assumed when calculating the laxity; the determination of the laxity is therefore inexact, and the laxity of waiting processes changes dynamically over time.
Deadline Monotone Algorithm. If the deadlines of tasks are less than their periods (di < pi), the prerequisites of the rate monotonic algorithm are violated. In this case, a fixed priority assignment according to the deadlines of the tasks is optimal: a task Ti gets a higher priority than a task Tj if di < dj. No effective schedulability test for the deadline monotone algorithm exists; to determine the schedulability of a task set, each task must be checked as to whether it meets its deadline in the worst case, in which all tasks request execution at their critical instant. Tasks with a deadline shorter than their period arise, for example, in temperature or pressure control systems. In multimedia systems, deadlines equal to period lengths can be assumed.
Shortest Job First (SJF).
The task with the shortest remaining computation time is chosen for execution. This algorithm guarantees that as many tasks as possible meet their deadlines
under an overload situation if all of them have the same deadline. In multimedia systems, where the resource management allows overload situations, this might be a suitable algorithm.
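Returning to LLF, the laxity formula can be evaluated directly; the symbols follow the definition above (s = start time, p = period, d = relative deadline, e = remaining processing time, t = current time), and the numbers are invented:

```python
# Sketch of the LLF laxity formula: in period k, the laxity is the latest
# possible completion point minus (current time + remaining run-time).

def laxity(s, k, p, d, t, e):
    return (s + (k - 1) * p + d) - (t + e)

# Invented example: stream started at s=0 ms, period 40 ms, deadline 40 ms
# after each release, third period (k=3), now t=100 ms, 10 ms of work left:
print(laxity(s=0, k=3, p=40, d=40, t=100, e=10))   # 120 - 110 = 10 ms of slack
```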
In this lesson we have learnt about process management in a multimedia operating system. We have touched upon the following points. Process management deals with the allocation and management of the main processor resource. The process manager transfers a process into the ready-to-run state by assigning it a position in the respective queue of the dispatcher. The real-time scheduling algorithms are the Earliest Deadline First algorithm and the Rate Monotonic algorithm. EDF is an optimal, dynamic algorithm; the rate monotonic algorithm is an optimal, static, priority-driven algorithm for preemptive, periodic jobs.
22.8 References
1. Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt.
2. Multimedia Systems, Standards, and Networks by Atul Puri and Tsuhan Chen.
3. Multimedia Storage and Retrieval: An Algorithmic Approach by Jan Korst and Verus Pronk.
4. Multimedia: Making it Work by Tay Vaughan.
23.1 Introduction
The file system is said to be the most visible part of an operating system. Most programs write or read files; their program code, as well as user data, is stored in files. The organization of the file system is an important factor for the usability and convenience of the operating system. A file is a sequence of information held as a unit for storage and use in a computer system.
The file system provides access and control functions for the storage and retrieval of files. From the user's viewpoint, it is important how the file system allows files to be organized and structured. The internals, which are more important in our context, i.e., the organization of the file system, deal with the representation of information in files, and their structure and organization in secondary storage.
Traditional File Systems
The two main goals of traditional file systems are: (1) to provide a comfortable interface for file access to the user, and (2) to make efficient use of storage media.
With linked storage allocation, the directory entry of a file must contain the number of blocks occupied by the file and a pointer to the first block; it may also have a pointer to the last block. A serious disadvantage of this method is the cost of implementing random access, because all prior data must be read. In MS-DOS, a similar method is applied: a File Allocation Table (FAT) is associated with each disk.
One entry in the table represents one disk block. The directory entry of each file holds the block number of its first block, and the number in the slot of an entry refers to the next block of the file; the slot of the last block of a file contains an end-of-file mark. Another approach is to store block information in mapping tables. Each file is associated with a table where, apart from the block numbers, information like owner, file size, creation time, last access time, etc., is stored. These tables usually have a fixed size, which means that the number of block references is bounded; files with more blocks are referenced indirectly by additional tables assigned to the files. In UNIX, a small table (on disk) called an i-node is associated with each file (see the following figure). The indexed-sequential approach is an example of multi-level mapping; here, logical and physical organizations are not clearly separated.
The UNIX i-node
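The FAT chaining described above can be sketched with a toy table (the block numbers are invented; -1 stands here for the end-of-file mark):

```python
# Sketch: following a FAT chain. Each slot names the next block of the file;
# -1 marks the end-of-file slot. The table content is invented.

FAT = {4: 7, 7: 2, 2: -1}      # block 4 -> block 7 -> block 2 -> EOF

def file_blocks(first_block, fat):
    """Collect a file's blocks by chasing the chain of slots."""
    blocks = []
    block = first_block
    while block != -1:
        blocks.append(block)
        block = fat[block]     # jump to the slot of the current block
    return blocks

print(file_blocks(4, FAT))     # [4, 7, 2]
```

Note how random access is expensive: reaching the third block always requires reading the two slots before it.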
Directory Structure
Files are usually organized in directories. Most of today's operating systems provide tree-structured directories in which the user can organize the files according to his/her personal needs. In multimedia systems, it is important to organize the files in a way that allows easy, fast and contiguous data access.
Disk Management
Disk access is a slow and costly transaction. In traditional systems, a common technique to reduce disk accesses is the block cache: blocks are kept in memory because it is expected that future read or write operations will access these data again, so performance is enhanced through shorter access times. Another way to enhance performance is to reduce disk arm motion: blocks that are likely to be accessed in sequence are placed together on one cylinder. To refine this method, rotational positioning can be taken into account, and consecutive blocks are placed on the same cylinder, but in an interleaved way, as shown in the following figure. Another important issue is the placement of the mapping tables (e.g., i-nodes in UNIX) on the disk. If they are placed near the beginning of the disk, the distance between them and the blocks they refer to will be, on average, half the number of cylinders. To improve this, they can be placed in the middle of the disk; the average seek time is then roughly reduced by a factor of two. In the same way, consecutive blocks should be placed on the same cylinder; using the same cylinder to store the mapping tables and the referred blocks also improves performance.
Interleaved and non-interleaved storage
The time needed to read or write a disk block is determined by: the seek time (the time required for the movement of the read/write head); the latency time or rotational delay (the time during which the transfer cannot proceed until the right block or sector rotates under the read/write head); and the actual data transfer time needed to copy the data from disk into main memory. Usually, the seek time is the largest factor of the total transfer time. Most systems try to keep the cost of seeking low by applying special algorithms to the scheduling of disk read/write operations. The access of the storage device is greatly influenced by the file allocation method. Most systems apply one of the following scheduling algorithms:
First-Come-First-Served (FCFS): With this algorithm, the disk driver accepts requests one at a time and serves them in incoming order. This is easy to program and an intrinsically fair algorithm. However, it is not optimal with respect to head movement because it does not consider the location of the other queued requests; this results in a high average seek time.
Shortest-Seek-Time-First (SSTF): At every point in time, when a data transfer is requested, SSTF selects among all requests the one with the minimum seek time from the current head position; the head is therefore moved to the closest track in the request queue. This algorithm was developed to minimize seek time, and in this sense it is optimal. SSTF is a modification of Shortest Job First (SJF), and like SJF, it may cause starvation of some requests: request targets in the middle of the disk will get immediate service at the expense of requests in the innermost and outermost disk areas.
SCAN: Like SSTF, SCAN orders requests to minimize seek time. In contrast to SSTF, it takes the direction of the current disk movement into account: it first serves all requests in one direction until no requests remain in this direction; the head movement is then reversed and service is continued.
SCAN provides very good seek times because even the edge tracks get better service than with SSTF. Note that middle tracks still get better service than edge tracks: when the head movement is reversed, it first serves tracks that have recently been serviced, while the heaviest density of requests (assuming a uniform distribution) is at the other end of the disk.
C-SCAN: C-SCAN also moves the head in one direction, but it offers fairer service with more uniform waiting times. It does not alter the direction, as SCAN does; instead, it scans in cycles, always increasing or decreasing, with one idle head movement from one edge to the other between two consecutive scans. The performance of C-SCAN is somewhat lower than that of SCAN.
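The different service orders of FCFS, SSTF and SCAN can be compared on one invented request queue. This is a simplified sketch: real drivers work on cylinders and handle requests arriving over time, and SCAN is assumed here to sweep upward first:

```python
# Sketch: service orders of three disk scheduling algorithms for one request
# queue of track numbers. Track numbers and head position are invented.

def fcfs(requests, head):
    """First-Come-First-Served: simply keep the arrival order."""
    return list(requests)

def sstf(requests, head):
    """Shortest-Seek-Time-First: always jump to the closest pending track."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        order.append(nearest)
        pending.remove(nearest)
        head = nearest
    return order

def scan(requests, head):
    """SCAN: serve everything in the current (upward) direction, then reverse."""
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down

queue = [55, 12, 70, 42]
print(fcfs(queue, head=50))   # [55, 12, 70, 42]
print(sstf(queue, head=50))   # [55, 42, 70, 12]
print(scan(queue, head=50))   # [55, 70, 42, 12]
```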
While processors and main memories have become much faster over the years, storage devices have become only marginally faster. The effect of this increasing speed mismatch is the search for new storage structures, and for storage and retrieval mechanisms with respect to the file system. Continuous media data differ from discrete data in:
Real-Time Characteristics: As mentioned previously, the retrieval, computation and presentation of continuous media are time-dependent. The data must be presented (read) before a well-defined deadline, with small jitter only.
File Size: Compared to text and graphics, video and audio have very large storage space requirements. Since the file system has to store information ranging from small, unstructured units like text files to large, highly structured data units like video and associated audio, it must organize the data on disk in a way that uses the limited storage efficiently.
Multiple Data Streams: A multimedia system must support different media at one time. It not only has to ensure that all of them get a sufficient share of the resources, it also must consider tight relations between streams arriving from different sources.
There are different ways to support continuous media in file systems; basically, there are two approaches. With the first approach, the organization of files on disk remains as it is; the necessary real-time support is provided through special disk scheduling algorithms and sufficient buffering to avoid jitter. In the second approach, the organization of audio and video files on disk is optimized for their use
in multimedia systems. Scheduling of multiple data streams still remains an issue of research.
SCAN-Earliest Deadline First: The SCAN-EDF strategy is a combination of the SCAN and EDF mechanisms. The seek optimization of SCAN and the real-time guarantees of EDF are combined in the following way: as in EDF, the request with the earliest deadline is always served first; among requests with the same deadline, the one that comes first in the scan direction is served first; among the remaining requests, this principle is repeated until no request with this deadline is left.
Group Sweeping Scheduling: With Group Sweeping Scheduling (GSS), requests are served in cycles, in round-robin manner. To reduce disk arm movements, the set of n streams is divided into g groups. Groups are served in fixed order, while individual streams within a group are served according to SCAN; therefore, it is not fixed at which time, or in which order, individual streams within a group are served. In one cycle a specific stream may be the first to be served; in another cycle it may be the last in the same group. A smoothing buffer, sized according to the cycle time and the data rate of the stream, assures continuity.
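The SCAN-EDF ordering rule reduces, in the simplest case, to a two-level sort key: primarily the deadline, secondarily the track position in the scan direction (an upward scan is assumed in this sketch, and the request values are invented):

```python
# Sketch: SCAN-EDF ordering as a two-level sort. Requests are (deadline, track)
# pairs; equal deadlines are broken by track position in the scan direction.

def scan_edf_order(requests):
    """Earliest deadline first; within one deadline, upward scan order."""
    return sorted(requests, key=lambda r: (r[0], r[1]))

queue = [(20, 90), (10, 70), (10, 30), (20, 10)]
print(scan_edf_order(queue))   # [(10, 30), (10, 70), (20, 10), (20, 90)]
```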
Mixed Strategy
The mixed strategy was introduced based on the shortest-seek (also called greedy) strategy and the balanced strategy. As shown in the following figure, every time data are retrieved from disk they are transferred into the buffer memory allocated to the respective data stream; from there, the application process removes them one at a time. The goals of the scheduling algorithm are:
o To maximize transfer efficiency by minimizing seek time and latency.
o To serve process requirements with a limited buffer space.
Shortest seek serves the first goal: the process whose data block is closest is served first. The balanced strategy chooses for service the process which has the least amount of buffered data, because this process is the most likely to run out of data. The crucial part of this algorithm is the decision of which of the two strategies to apply. For the employment of shortest seek, two criteria must be fulfilled: the buffers of all processes should be balanced (i.e., all processes should have nearly the same amount of buffered data), and the overall bandwidth should be sufficient for the number of active processes, so that none of them will try to read data out of an empty buffer. The urgency is introduced as an attempt to measure both. The urgency is the sum of the reciprocals of the current buffer fullness (amount of buffered data) of all read processes; this number measures both the relative balance of all read processes and their number. If the urgency is large, the balanced strategy is used; if it is small, it is safe to apply the shortest-seek algorithm.
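The urgency computation and the resulting strategy choice can be sketched as follows. The threshold separating "large" from "small" urgency is an assumption made for the example; the text gives no concrete value.

```python
# Sketch of the mixed-strategy decision rule: urgency is the sum of the
# reciprocals of each stream's current buffer fullness. A large urgency
# means buffers are unbalanced or nearly empty, so the balanced strategy
# is chosen; a small urgency makes shortest seek safe.
def urgency(buffer_fullness):
    """buffer_fullness: amount of buffered data per stream (each > 0)."""
    return sum(1.0 / f for f in buffer_fullness)

def choose_strategy(buffer_fullness, threshold=1.0):
    # threshold is an illustrative assumption, not from the source text
    if urgency(buffer_fullness) >= threshold:
        return "balanced"
    return "shortest-seek"

print(choose_strategy([10, 12, 11]))   # well-filled, balanced buffers
print(choose_strategy([10, 0.5, 11]))  # one stream close to starvation
```

In the first call the urgency is about 0.27, so seek optimization is safe; in the second, the nearly empty buffer alone contributes 2.0 to the urgency, forcing the balanced strategy.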
Check Your Progress 2
Distinguish additive and subtractive colors and state their areas of use.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.

Continuous Media File System (CMFS)
CMFS disk scheduling is a non-preemptive disk scheduling scheme designed for the Continuous Media File System (CMFS) at UC Berkeley. Different policies can be applied in this scheme. Here the notion of the slack time H is introduced: the slack time is the time during which CMFS is free to do non-real-time operations or workahead for real-time processes, because the current workahead of each process is sufficient that no process would starve even if it were not served for H seconds. Several real-time scheduling policies, such as the static/minimum policy, the greedy policy and the cyclical plan policy, have been implemented and tested in a prototype file system.
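The idea behind the slack time H can be illustrated with a small sketch. This is not the actual CMFS admission formula, only a hedged approximation: assume each stream starves once its buffered workahead has been consumed at its playback rate, so a conservative slack time is the minimum time-to-starvation over all streams.

```python
# Illustrative approximation of slack time H: stream i with buffered_i
# bytes of workahead, consumed at rate_i bytes/second, starves after
# buffered_i / rate_i seconds without service. The minimum over all
# streams bounds how long non-real-time work may run.
def slack_time(streams):
    """streams: iterable of (buffered_bytes, rate_bytes_per_sec) pairs."""
    return min(buffered / rate for buffered, rate in streams)

# Two streams: 1 MB buffered at 256 kB/s, 600 kB buffered at 100 kB/s.
H = slack_time([(1_000_000, 256_000), (600_000, 100_000)])
print(round(H, 2))  # seconds available for non-real-time operations
```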
meaning that the page must be read from disk. Page faults affect real-time performance very seriously, so they must be avoided. A possible approach is to lock code and/or data into real memory; however, care should be taken when doing so.

Device Management
Device management and the actual access to a device allow the operating system to integrate all hardware components. A physical device is represented by an abstract device driver, and the physical characteristics of devices are hidden. In a conventional system, such devices include a graphics adapter card, disk, keyboard and mouse. In
multimedia systems, additional devices like cameras, microphones, speakers and dedicated storage devices for audio and video must be considered. In most existing multimedia systems, such devices are often not integrated by device management and the respective device drivers. Existing operating system extensions for multimedia usually provide one common, system-wide interface for the control and management of data streams and devices.
23.10 References
1. Multimedia Systems, Standards, and Networks by Atul Puri and Tsuhan Chen.
2. Multimedia: Concepts and Practice by Stephen McGloughlin.
3. Multimedia Computing, Communication and Application by Ralf Steinmetz and Klara Nahrstedt.
24.4 Data Analysis
24.5 Data Structure for Multimedia Databases
24.6 Relational Database Model
24.7 Multimedia Objects Storage Model
24.8 Different Architectures for Multimedia Databases
24.9 Let us sum up
24.10 Lesson-end activities
24.11 Model answers to Check your progress
24.12 References
24.1 Introduction
Multimedia database systems are database systems in which, besides text and other discrete data, audio and video information is also stored, manipulated and retrieved. To provide this functionality, multimedia database systems require proper storage technology and file systems. Current storage technology allows audio and video information to be read, stored and written in real time. This can happen either through dedicated external devices, which have long been available, or through system-integrated secondary storage. The external devices were developed for studio and electronic entertainment applications; they were not developed for the storage of discrete media. An example is a video recorder controlled through a digital interface.
that a DBMS should be able to manipulate data even after changes to the surrounding programs.

Consistent View of Data
In multi-user systems it is important to provide a consistent view of data while database requests are processed. This property is achieved using synchronization protocols.

Security of Data
Security of data, and integrity protection in the database in the case of system failure, is one of the most important requirements of a DBMS. This property is provided using the transaction concept.

Query and Retrieval of Data
Different information (entries) is stored in databases and can later be retrieved through database queries. Database queries are formulated with query languages such as SQL.
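The transaction and query properties above can be illustrated with Python's built-in sqlite3 module. The table and column names are invented for this example; the point is only that a transaction either commits completely or leaves the database unchanged, and that entries are retrieved via SQL queries.

```python
# Minimal illustration of the transaction concept and SQL retrieval
# using the standard-library sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, title TEXT)")

# Transaction: either both inserts commit, or neither does.
try:
    with con:  # commits on success, rolls back on an exception
        con.execute("INSERT INTO media (title) VALUES (?)", ("intro.mp4",))
        con.execute("INSERT INTO media (title) VALUES (?)", ("lecture.wav",))
except sqlite3.Error:
    pass  # on failure the database is left unchanged

# Query and retrieval of data via SQL.
rows = con.execute("SELECT title FROM media ORDER BY id").fetchall()
print(rows)  # [('intro.mp4',), ('lecture.wav',)]
```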
7. Relational Consistency of Data Management
Relations among data of one or different media must stay consistent according to their specification. The MDBMS manages these relations and can use them for queries and data output. For example, navigation through a document is supported by managing the relations among the individual parts of the document.

8. Real-Time Data Transfer
The read and write operations on continuous data must be done in real time. The transfer of continuous data has a higher priority than other database management actions; hence, the primitives of a multimedia operating system should be used to support the real-time transfer of continuous data.

9. Long Transactions
A transaction in an MDBMS may transfer a large amount of data, which takes a long time and must be done in a reliable fashion. An example of a long transaction is the retrieval of a movie.

In the architecture model, the system components around the MDBMS, and the MDBMS itself, have the following functions: the operating system provides the management interface for the MDBMS to all local devices. The MDBMS provides an abstraction of the stored data and their equivalent devices, as is the case in a DBMS without multimedia. The communication system provides for the MDBMS abstractions for communication with entities at remote computers; these communication abstractions are specified through interfaces according to, for example, the Open Systems Interconnection (OSI) architecture. A layer above the DBMS, operating system and communication system can unify all these different abstractions and offer them, for example, in an object-oriented environment such as a toolkit. Thus, an application has access to each abstraction at a different level.
2. How can these data be accessed? That is to say, how are the proper operations defined to access multimedia entries? One can define media-dependent as well as media-independent operations. In a next step, a class hierarchy with respect to object-oriented programming may be implemented.

Advantages of multimedia databases
The following are advantages of storing multimedia objects in the database:

Better security: Multimedia objects are secured in the database and can be retrieved whenever required. For example, a doctor can see the M.R.I. report of a patient whenever he wants, and it can be preserved as long as the patient is under the doctor's treatment.

Greater control (resizing, manipulation): Multimedia objects can be resized, and certain changes can be made, when required.

Easy deletion: Images can be deleted without deleting the corresponding data from the database.

Uniform search: One can search for multimedia content in the same fashion as for traditional relational data. For instance, customers can search for images by a unique image name or key, by photographer, or by category. In addition, customers can use Oracle interMedia's content-based image retrieval capability to search for similar images throughout the database. Once a customer has found an image of interest, they can simply click on it and see the full-resolution image.

Easy to extract statistics on usage.

Oracle interMedia enables Oracle9i to manage multimedia content (image, audio, and video) in an integrated fashion with other enterprise information. It extends Oracle9i reliability, availability, and data management to multimedia content in media-rich Internet applications. As an integral part of the Oracle9i database server, Oracle interMedia data benefits from all Oracle9i capabilities, including its speed, efficiency, scalability, security, and power.

Check Your Progress 1
List a few advantages of multimedia databases.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.
3. The descriptive data may include information for layout and logical structuring of the text, or keywords.

Images can be stored in databases using the following forms:
1. Pixels (a pixel matrix) represent the raw data. A compressed image may also consist of a transformed pixel set.
2. The registration data include the height and width of the picture. Additionally, the details of the coding are stored here.
Student (Admission_Number: Integer, Name: String, Picture: Image, Exercise_Device_1: Video, Exercise_Device_2: Video)
Athletics (Admission_Number: Integer, Qualification: Integer, The_High_Jump: Video, The_Mile_Run: Video)
Swimming (Admission_Number: Integer, Crawl: Video)
Analysis (Qualification: Integer, Error_Pattern: String, Comment: Audio)

Type 1 Relational Model
In the type 1 relational model, the value of a certain attribute is fixed over the particular set of the corresponding attribute types; e.g., the frame rate of motion video can be fixed.

Type 2 Relational Model
A variable number of entries can be defined through the type 2 relational model.

Type 3 Relational Model
In addition to fixed attribute values per relation and a variable number of entries, an entry can simultaneously belong to several relations. This property is called the type 3 relational model.
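The Student, Athletics, Swimming and Analysis relations can be sketched as SQL DDL. This is an illustration only: SQLite is used for convenience, and since it has no native Image, Video or Audio types, BLOB columns stand in for the media attributes.

```python
# The example relations expressed as SQL tables; BLOB approximates the
# media types (Image/Video/Audio) that the relational model lists.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Student (
    Admission_Number  INTEGER PRIMARY KEY,
    Name              TEXT,
    Picture           BLOB,   -- Image
    Exercise_Device_1 BLOB,   -- Video
    Exercise_Device_2 BLOB    -- Video
);
CREATE TABLE Athletics (
    Admission_Number INTEGER,
    Qualification    INTEGER,
    The_High_Jump    BLOB,    -- Video
    The_Mile_Run     BLOB     -- Video
);
CREATE TABLE Swimming (
    Admission_Number INTEGER,
    Crawl            BLOB     -- Video
);
CREATE TABLE Analysis (
    Qualification INTEGER,
    Error_Pattern TEXT,
    Comment       BLOB        -- Audio
);
""")
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['Analysis', 'Athletics', 'Student', 'Swimming']
```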
Object metadata and methods are always stored in the database. Whether multimedia content is stored inside or outside the database, the database manages metadata for all multimedia object types and automatically extracts that metadata for each type. This metadata includes the following:
o Data storage information, including the source type, location, and name.
o Data update time and format.
o MIME media type (used in Web and mail applications).
o Image height and width, and image content length, format, and compression type.
o Audio encoding type, number of channels, sampling rate, sample size, compression type, play time (duration), and description.
o Video frame width and height, frame resolution and rate, play time (duration), number of frames, compression type, number of colors, bit rate, and description.
o Selected application metadata (for example, singer or studio names).
This architecture is a question-answer system: the result of a query given by the user is a multimedia presentation containing the answer. The following figure illustrates this architecture:
[Figure: question-answer architecture. Components: Query Processor, Semantic Generator, Natural Language Generator, Presentation Generator, Player and Ontology Agent, connected to the multimedia databases MM DB1 and MM DB2 with their respective ontologies (DB1 ontology, DB2 ontology); the data flows are labeled "query", "fact", "text" and "multimedia presentation".]
24.8.2 SQL+D Architecture
SQL+D is an extension to SQL which allows users to dynamically specify how to display the answers to queries posed to multimedia databases. It provides tools to display multimedia data plus other traditional GUI elements such as boxed text, checkboxes, lists, and buttons. The current version of SQL+D includes: the full implementation of the database interface, allowing users to connect to local and remote ODBC (or JDBC) compliant databases, such as ORACLE or Microsoft Access; and a simplified display-specification syntax and instantiation of display elements. SQL+D differs from other efforts in that it is specifically designed for querying multimedia databases: it emphasizes by-the-user specification, in the query itself, of the display of the output data. In contrast, others have focused on specification of the query, data visualization, or data browsing. SQL+D allows all of these, and browsers and visual querying applications have been built as a proof of concept. By proposing SQL+D as a language extension to SQL, its designers intend to maintain the flexibility that allowed SQL to be adopted as the query language of choice by a great number of database management systems and browsers, as well as by many programming languages that allow embedded SQL queries.

24.8.3 Entity-Multimedia-Relationship Model (EMRM)
The EMRM model can be designed with traditional entities and their relationship sets, along with an additional set called the multimedia set. Multimedia sets are shown in this model as entities and can be connected to the normal entities using relationships. For example, one can design an EMRM with the description of the entities and relationships as
given below: The entity Document is central to the system. As a new medium, it contains multimedia objects. The cardinality between Document and Media Object is one to many, which means a Document has at least one media object; there is no blank Document. A media object can exist without a Document, so the cardinality between Media Object and Document is zero to many, and a media object can belong to multiple Documents. The different media entities Text, Picture, Audio and Video form a (t, d) classification hierarchy with Media Object: each medium is a subclass of Media Object, and they are disjoint with each other. Two kinds of user groups are anticipated in the system: the Reader and Author entities. A generic entity Person is defined to include the common attributes of Reader and Author. Since a person can be an Author and a Reader at the same time, the classification hierarchy is (p, o), which means a Person can be a Reader, an Author, or both. The relationship between Reader and Document is defined by the connections Read and Readby. A Reader can exist without any Document, and a Document can have no reader
at all. A Reader can read an unlimited number of Documents, and an unlimited number of Readers can read a Document, so the cardinalities are from zero to many. Another relationship, defined between Author and Document, consists of the Write and Writtenby connections. The difference is that a Document must have at least one Author, while an Author can write zero to many Documents. The last entity is Publisher; a relationship addresses the connection between Publisher and Document. The Publisher can be an organization such as ACM, a department, or a laboratory. A Document can be jointly published by several organizations, so the cardinality is from 1 to n.
i) Better security
ii) Greater control (resizing, manipulation)
iii) Easy deletion
iv) Easy to extract statistics on usage
24.12 References
1. Multimedia Servers: Applications, Environments and Design by Dinkar Sitaram and Asit Dan.
2. Semantic Models for Multimedia Database Searching and Browsing by Shu-Ching Chen, Rangasami L. Kashyap and Arif Ghafoor.
3. Multimedia: Concepts and Practice by Stephen McGloughlin.
4. Multimedia Computing, Communication and Application by Ralf Steinmetz and Klara Nahrstedt.
25.1 Introduction
Synchronization in multimedia systems refers to the temporal relations between media objects in the multimedia system. In a more general and widely used sense, some authors use synchronization in multimedia systems as comprising the content, spatial and temporal relations between media objects. Process management deals with the main processor as a resource, whose capacity is specified as processor capacity. The process manager maps single processes onto this resource according to a specified scheduling policy such that all processes meet their requirements.
UNIX and its variants, Microsoft's Windows NT, Apple's System 7 and IBM's OS/2, are, and will remain, the most widely installed operating systems with multitasking capabilities on personal computers (including the PowerPC) and workstations. Although some are enhanced with special priority classes for real-time processes, this is not
sufficient for multimedia applications. Consider, for example, an analysis of the UNIX scheduler, which provides a real-time static-priority scheduler in addition to the standard UNIX time-sharing scheduler. For this investigation, three applications were chosen to run concurrently: typing as an interactive application, video as a continuous-media application, and a batch program. The result was that only through trial and error might a particular combination of priorities and scheduling-class assignments be found that works for a specific application set; i.e., additional features must be provided for the scheduling of multimedia data processing.

Threads
OS/2 was designed as a time-sharing operating system without taking serious real-time applications into account. An OS/2 thread can be considered a light-weight process: it is the dispatchable unit of execution in the operating system. A thread belongs to exactly one address space (called a process in OS/2 terminology). All threads share the resources allocated by the respective address space. Each thread has its own execution stack, register values and dispatch state (either executing or ready-to-run). Each thread belongs to one of the following priority classes:
o The time-critical class is reserved for threads that require immediate attention.
o The fixed-high class is intended for applications that require good responsiveness without being time-critical.
o The regular class is used for the execution of normal tasks.
o The idle-time class contains threads with the lowest priorities. Any thread in this class is dispatched only if no thread of any other class is ready to execute.

Priorities
Within each class, 32 different priorities (0, ..., 31) exist. Through time-slicing, threads of equal priority have equal chances to execute. A context switch occurs whenever a thread tries to access an otherwise allocated resource; the thread with the highest priority is then dispatched and the time slice is started again.
Threads are preemptible: if a higher-priority thread becomes ready to execute, the scheduler preempts the lower-priority thread and assigns the CPU to the higher-priority thread. The state of the preempted thread is recorded so that its execution can be resumed later.

Physical Device Driver as Process Manager
In OS/2, applications with real-time requirements can run as Physical Device Drivers (PDDs) at ring 0 (kernel mode). These PDDs can be made non-preemptable. An interrupt that occurs on a device (e.g., packets arriving at the network adapter) can be serviced by the PDD immediately: as soon as an interrupt happens on a device, the PDD gets control and does all the work needed to handle the interrupt. This may also include
tasks which could be done by application processes running in ring 3 (user mode). The
task running at ring 0 should (but need not) leave kernel mode after 4 msec. Operating system extensions for continuous-media processing can be implemented as PDDs. In this approach, a real-time scheduler and the process management run as a PDD activated by a high-resolution timer. In principle, this is the implementation scheme of the OS/2 Multimedia Presentation Manager, which represents the multimedia extension to OS/2.
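The class-and-priority dispatching described above can be sketched as a toy model. The class ranking, the heap representation and the API names are invented for illustration; this is a sketch of priority-based preemptive dispatch, not of OS/2's actual implementation.

```python
# Toy preemptive dispatcher: four priority classes, 32 priorities each.
# The highest-ranked ready thread runs; a newly ready, higher-ranked
# thread preempts the running one, which returns to the ready queue.
import heapq

CLASS_RANK = {"time-critical": 0, "fixed-high": 1, "regular": 2, "idle-time": 3}

class Dispatcher:
    def __init__(self):
        self.ready = []     # min-heap of (class rank, -priority, name)
        self.running = None

    def make_ready(self, name, cls, priority):
        """priority is 0..31 within the class; higher number = higher priority."""
        heapq.heappush(self.ready, (CLASS_RANK[cls], -priority, name))
        self._dispatch()

    def _dispatch(self):
        if not self.ready:
            return
        if self.running is None or self.ready[0] < self.running:
            if self.running is not None:          # preempt the running thread
                heapq.heappush(self.ready, self.running)
            self.running = heapq.heappop(self.ready)

d = Dispatcher()
d.make_ready("batch", "regular", 5)
print(d.running[2])   # batch runs: nothing else is ready
d.make_ready("video", "time-critical", 20)
print(d.running[2])   # video preempts batch immediately
```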
MHEG Engine
At the European Networking Center in Heidelberg, an MHEG engine has been developed. The MHEG engine is an implementation of the object layer; its architecture is shown in the following figure. The Generic Presentation Services of the engine provide abstractions from the presentation modules used to present the content objects. The Audio/Video-Subsystem is a stream-layer implementation; this component is responsible for the presentation of continuous media streams, e.g., audio/video streams. The User Interface Services provide the presentation of time-independent media, like text and graphics, and the processing of user interactions, e.g., buttons and forms. The MHEG engine receives the MHEG objects from the application. The Object Manager manages these objects in the run-time environment. The Interpreter processes the action objects and events; it is responsible for initiating the preparation and presentation of the objects. The Link Processor monitors the states of objects and triggers links if the trigger conditions of a link are fulfilled.

[Figure: MHEG action handling — lists of actions, parallel lists of actions, and delayed sequential actions built from delay and action elements.]
The run-time system communicates with the presentation services by events. The User Interface Services provide events that indicate user actions. The Audio/Video-Subsystem provides events about the status of the presented streams, such as the end of the presentation of a stream or reaching a cuepoint in a stream.

[Figure: Architecture of an MHEG engine — Application; MHEG Engine containing the Object Manager, Link Processor and Interpreter; Generic Presentation Services; AV-Subsystem; User Interface Services; Operating System.]

25.3.2 HyTime
HyTime (Hypermedia/Time-based Structuring Language) is an international standard (ISO/IEC 10744) for the structured representation of hypermedia information. HyTime is an application of the Standard Generalized Markup Language (SGML). SGML is designed for document exchange, whereby the document structure is of great importance but the layout is a local matter. The logical structure is defined by markup commands that are inserted in the text; the markup divides the text into SGML elements. For each SGML document, a Document Type Definition (DTD) exists which declares the element types of the document, the attributes of the elements and how the instances are hierarchically related. A typical use of SGML is in the publishing industry, where an author is not responsible for the layout. As the content of the document is not restricted by SGML, elements can be of type text, picture or other multimedia data. HyTime defines how markup and DTDs can be used to describe the structure of hyperlinked, time-based multimedia documents. HyTime does not define the format or
encoding of elements. It provides the framework for defining the relationships between these elements. HyTime supports addresses to identify a certain piece of information within an element, linking facilities to establish links between parts of elements, and temporal and spatial alignment specifications to describe the relationships between media objects.
HyTime defines architectural forms that represent SGML element declaration templates with associated attributes; the semantics of these architectural forms are defined by HyTime. A HyTime application designer creates a HyTime-conforming DTD using the architectural forms he/she needs for the HyTime document. In the HyTime DTD, each element type is associated with an architectural form by a special HyTime attribute. The HyTime architectural forms are grouped into the following modules:

The Base Module specifies the architectural forms that comprise the document.

The Measurement Module is used to add dimensions, measurement and counting to documents. Media objects in the document can be placed along the dimensions.

The Location Address Module provides the means to address locations in a document. The following addressing modes are supported:
o Name Space Addressing Schema: addressing by a name identifying a piece of information.
o Coordinate Location Schema: addressing by referring to an interval of a coordinate space, if measuring along the coordinate space is possible. An example is addressing a part of an audio sequence.
o Semantic Location Schema: addressing by using application-specific constructs.

The Scheduling Module places media objects into Finite Coordinate Spaces (FCSs). These spaces are collections of application-defined axes. To add measures to the axes, the measurement module is needed. HyTime does not know the dimensions of its media objects.

The Hyperlink Module enables building link connections between media objects. Endpoints can be defined using the location address, measurement and scheduling modules.

The Rendition Module is used to specify how the events of a source FCS, which typically provides a generic presentation description, are transformed into a target FCS that is used for a particular presentation.
Check Your Progress 1
List the modules available in the HyTime architecture.
Notes: a) Write your answers in the space given below.
b) Check your answers with the ones given at the end of this lesson.

HyTime Engine
The task of a HyTime engine is to take the output of an SGML parser, to recognize architectural forms and to perform the HyTime-specific, application-independent processing. Typical tasks of the HyTime engine are hyperlink resolution, object addressing, parsing of measures and schedules, and transformation of schedules
and dimensions. The resulting information is then provided to the HyTime application. The HyTime engine HyOctane, developed at the University of Massachusetts at Lowell, has the following architecture: an SGML parser takes as input the application's Document Type Definition for the document and the HyTime document instance. It stores the document objects' markup and contents, as well as the application's DTD, in the SGML layer of a database. The HyTime engine takes as input the information stored in the SGML layer of the database. It identifies the architectural forms, resolves addresses from the location address module, handles the functions of the scheduling module and performs the mapping specified in the rendition module. It stores the information about elements of the document that are instances of architectural forms in the HyTime layer of the database. The application layer of the database stores the objects and their attributes, as defined by the DTD. An application presenter gets from the database the information it needs for the presentation of the database content, including the links between objects and the presentation coordinates to use.

25.3.3 MODE
The MODE (Multimedia Objects in a Distributed Environment) system, developed at the University of Karlsruhe, is a comprehensive approach to network-transparent synchronization specification and scheduling in heterogeneous distributed systems. The heart of MODE is a distributed multimedia presentation service which shares a customized multimedia object model, synchronization specifications and QoS requirements with a given application. It also shares knowledge about networks and workstations with a given run-time environment. The distributed service uses all this information for synchronization scheduling when the presentation of a compound
multimedia object is requested by the application. Thereby, it adapts the QoS of the presentation to the available resources, taking into account a cost model and the QoS requirements given by the application. The MODE system contains the following synchronization-related components:

The Synchronization Editor at the specification layer, which is used to create synchronization and layout specifications for multimedia presentations.

The MODE Server Manager at the object layer, which coordinates the execution of the presentation service calls. This includes coordinating the creation of units of presentation (presentation objects) out of basic units of information (information objects) and the transport of objects in a distributed environment.

The Local Synchronizer, which locally receives the presentation objects and initiates their local presentation according to a synchronization specification.

The Optimizer, part of the MODE Server Manager, which plans the distributed synchronization and chooses presentation qualities and presentation forms depending on user demands, network and workstation capabilities and presentation performance.

Synchronization Model
In the MODE system, a synchronization model based on synchronization at reference points is used. This model is extended to cover the handling of time intervals, objects of unpredictable duration and conditions which may be raised by the underlying
distributed heterogeneous environment. A synchronization specification created with the Synchronization Editor and used by the Synchronizer is stored in textual form. The syntax of this specification is defined in the context-free grammar of the Synchronization Description Language. This way, a synchronization specification can be used by MODE components independent of their implementation language and environment.

Local Synchronizer
The Local Synchronizer performs synchronized presentations according to the synchronization model introduced above. This comprises both intra-object and inter-object synchronization. For intra-object synchronization, a presentation thread is created which manages the presentation of a dynamic basic object. Threads with different priorities may be used to implement priorities of basic objects. All presentations of static basic objects are managed by a single thread. Synchronization is performed by a signaling mechanism: each presentation thread reaching a synchronization point sends a corresponding signal to all other presentation threads involved in the synchronization point. Having received such a signal, the other presentation threads may perform acceleration actions if necessary. After the dispatch of all signals, the presentation thread waits until it receives signals from all other
participating threads of the synchronization point; meanwhile, it may perform a waiting action.

Exceptions Caused by the Distributed Environment
The correct temporal execution of the plan depends on whether the underlying environment, i.e., the workstations and network, provides temporal guarantees for the execution of the operations. Therefore, MODE provides several guarantee levels. If the underlying distributed environment cannot give full guarantees, MODE considers the possible error conditions. Three types of actions are used to define the behavior in the case of exception conditions which may be raised during a distributed synchronized presentation:

1) A waiting action can be carried out if the presentation of a dynamic basic object has reached a synchronization point and waits longer than a specified time at this synchronization point. Possible waiting actions are, for example, continuing the presentation of the last presentation object (freezing a video, etc.), pausing, or cancellation of the synchronization point.

2) When the presentation of a dynamic basic object has reached a synchronization point and waits for other objects to reach this point, acceleration actions represent an alternative to waiting actions: they move the delayed dynamic basic objects to this synchronization point in due time. Priorities may be used for basic objects to reflect their sensitivity to delays in their presentation. For example, audio objects will usually be assigned higher priorities than video objects, because a user recognizes jitter in an audio stream earlier than jitter in a video stream. Presentations with higher priorities are preferred over objects with lower priorities in both presentation and synchronization.
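The signal-and-wait exchange at a reference point can be sketched with Python threads, where `threading.Barrier` bundles the "signal all participants, then wait for all their signals" behavior described above. The stream names and unit counts are invented; real presentation work is reduced to a placeholder.

```python
# Sketch of inter-object synchronization at a reference point: each
# presentation thread arrives at the synchronization point and continues
# only once all participating threads have arrived.
import threading

results = []
lock = threading.Lock()
sync_point = threading.Barrier(2)  # audio and video presentation threads

def present(medium, units):
    for _ in range(units):
        pass                       # placeholder: present one stream unit
    sync_point.wait()              # reference point: signal and wait for all
    with lock:
        results.append(medium)     # past the sync point

audio = threading.Thread(target=present, args=("audio", 3))
video = threading.Thread(target=present, args=("video", 5))
audio.start(); video.start()
audio.join(); video.join()
print(sorted(results))  # ['audio', 'video'] - both streams passed the point
```

A waiting action, as in item 1) above, would correspond to passing a timeout to `Barrier.wait()` and handling `BrokenBarrierError`.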
Browse the internet for different implementations of multimedia concepts in operating systems. List the operating systems which include multimedia features.
25.7 References
1. Multimedia Computing, Communication and Application by Ralf Steinmetz and Klara Nahrstedt.
2. Multimedia Systems, Standards, and Networks by Atul Puri and Tsuhan Chen.
3. Multimedia Storage and Retrieval: An Algorithmic Approach by Jan Korst and Verus Pronk.