
Linear video editing

Linear video editing is the process of selecting, arranging and modifying the images and
sound recorded on videotape whether captured by a video camera, generated from a
computer graphics program or recorded in a studio. Until the advent of computer-based
non-linear editing in the early 1990s, "linear video editing" was simply called "video
editing."
A Sony BVE-910 linear editing system's keyboard
History
Actual live television is still basically produced in the same manner as it was in the 1950s
(although transformed by myriad technical advances). However, the only way of airing
the same shows again before videotape was introduced was by filming shows using a
kinescope (essentially, a video monitor paired with a movie camera). However,
kinescopes (the films of television shows) suffered from various sorts of picture
degradation, from image distortion and apparent scan lines to artifacts in contrast and loss
of detail. Also, kinescopes had to be processed and printed in a film laboratory, making
them somewhat dicey for broadcasts delayed for different time zones.

So the primary motivation for the development of videotape was as a short- or long-term
archival medium. Only after a series of technical advances spanning decades did
videotape editing finally become a viable production tool, up to par with film editing.

The first widely accepted videotape in the United States was two inches wide and
travelled at 15 inches per second. To gain enough head-to-tape speed, four video
recording and playback heads were spun on a head wheel across most of the two-inch
width of the tape. (Audio and synchronization tracks were recorded along the sides of the
tape with stationary heads.) This system was known as Quad, for quadruplex recording.
See 2 inch Quadruplex videotape.

The resulting video tracks were laid down at slightly less than a ninety-degree angle to
the edge of the tape (the result of the vector addition of the high-speed spinning heads
tracing across the 15 inches per second forward motion of the tape).

Originally videotape was edited by physically cutting and splicing the tape, in a manner
similar to film editing. This was an arduous process and not widely performed. When it
was used, the two pieces of tape to be joined were painted with a solution of extremely
fine iron filings suspended in carbon tetrachloride. This exposed the magnetic tracks, so
that they could be aligned in a splicer designed for this task. The tracks had to be cut
during a vertical retrace, without disturbing the odd-field/even-field ordering. The cut
also had to be at the same angle at which the video tracks were laid down on the tape.
Also, since the video and audio read heads were several inches apart, it was not possible
to make a physical edit that would appear correct in both video and audio. The cut was
made for video, and a portion of audio was then re-copied into the correct relationship
(the same technique as for editing 16mm film with a combined magnetic audio track).

The disadvantages of physically editing tapes were many: edited tapes could not be
reused (in an era when videotapes frequently were, because of their high unit cost); the
process required great skill, and often resulted in edits that would roll (lose sync); and
each edit required several minutes to perform.

The television show Rowan & Martin's Laugh-In was the first and possibly only TV
show to make extensive use of this method.

A system for editing Quad tape "by hand" was developed by the 1960s. It was really just
a means of synchronizing the playback of two machines so that the signal of the new shot
could be "punched in" with a reasonable chance at success. One problem with this and
early computer-controlled systems was that the audio track was prone to suffer artifacts
(i.e. a short buzzing sound) because the video of the newly-recorded shot would record
into the side of the audio track. A commercial solution known as "Buzz Off" was used to
minimize this effect.

For more than a decade, computer-controlled Quad editing systems were the standard
post-production tool for television. Quad tape involved expensive hardware, time-consuming
setup and relatively long rollback times for each edit, and showed misalignment as
disagreeable "banding" in the video. That said, Quad tape has better bandwidth than any
smaller-format analogue tape, and properly handled it could produce a picture
indistinguishable from that of a live camera.

When helical scan video recorders became the standard it was no longer possible to
physically cut the tape. At this point video editing became a process of using two video
tape machines, playing back the source tape (or raw footage) from one machine and
copying just the portions desired onto a second tape (the edit master).

The bulk of linear editing is done simply, with two machines and a device to control
them. Many video tape machines are capable of controlling a second machine,
eliminating the need for an external editing control device.

This process is 'linear', rather than non-linear, because the nature of the tape-to-tape
copying requires that all shots be laid out in the final edited order. Once a shot is on tape,
nothing can be placed ahead of it without overwriting whatever is there already. If
absolutely necessary, material can be inserted by copying the edited content onto another
tape, but since each copy introduces a generation of image degradation this is not
desirable.

One drawback of the early video editing technique was that it was impractical to produce
a rough cut for presentation to an executive producer. Since executive producers are never
familiar enough with the material to be able to visualise the finished product from
inspection of a decision list, they were deprived of the opportunity to voice their opinions
at a time when those opinions could be easily acted upon. Thus, particularly in
documentary television, video was resisted for quite a long time.

Video editing reached its full potential in the late 1970s when computer-controlled edit
suite controllers were developed, which could orchestrate an edit based on an edit
decision list (EDL), using timecode to synchronize multiple tape machines and auxiliary
devices. The most popular and widely used computer edit systems came from Sony,
Ampex and the venerable CMX. Systems such as these were expensive (especially when
considering auxiliary equipment like VTRs, video switchers and graphics generators) and
were usually limited to high-end post-production facilities.

Jack Calaway of Calaway Engineering was the first to produce a lower-cost, PC-based,
"CMX-style" linear editing system, which greatly expanded the use of linear editing
systems throughout the post-production industry. Following suit, other companies,
including EMC and Strassner Editing Systems, came out with equally useful competing
editing products.

While computer-based non-linear editing has been adopted throughout most of the
commercial, film, industrial and consumer video industries, linear video tape editing is
still commonplace in television station newsrooms and medium-sized production
facilities which haven't made the capital investment in newer technologies. News
departments often still use linear editing because they can start editing tape and feeds
from the field as soon as they are received, since no additional time is spent capturing
material, as is necessary in non-linear editing systems.

Non-linear editing

A non-linear editing system

Non-linear editing for film and television postproduction is a modern editing method
which involves being able to access any frame in a video clip with the same ease as any
other. This method is similar in concept to the "cut and paste" technique used in film
editing from the beginning. However, when working with film, it is a destructive process,
as the actual film negative must be cut. Non-linear, non-destructive methods began to
appear with the introduction of digital video technology. It can also be viewed as the
audio/video equivalent of word processing, which is why it is called desktop editing in
the consumer space [1].
Video and audio data are first captured to hard disks or other digital storage devices. The
data is either recorded directly to the storage device or is imported from another source.
Once imported, the material can be edited on a computer using any of a wide range of
software. For a comprehensive list of available software, see List of video editing
software; Comparison of video editing software gives more detail of features and
functionality.

In non-linear editing, the original source files are not lost or modified during editing.
Professional editing software records the decisions of the editor in an edit decision list
(EDL) which can be interchanged with other editing tools. Many generations and
variations of the original source files can exist without needing to store many different
copies, allowing for very flexible editing. It also makes it easy to change cuts and undo
previous decisions simply by editing the edit decision list (without having to have the
actual film data duplicated). Loss of quality is also avoided due to not having to
repeatedly re-encode the data when different effects are applied.
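To illustrate why this approach is non-destructive, here is a minimal sketch of the idea in Python (the clip names and frame numbers are hypothetical, and real EDL formats such as CMX3600 carry far more information): the "edit" is only a list of references into untouched source files.

```python
from dataclasses import dataclass

@dataclass
class Edit:
    """One event in a simplified edit decision list (EDL)."""
    source_clip: str   # path to the untouched source file
    source_in: int     # first frame used, in source frames
    source_out: int    # one past the last frame used

# The "edit" is just this list; the source files on disk never change.
edl = [
    Edit("interview.dv", source_in=120, source_out=480),
    Edit("broll_city.dv", source_in=0, source_out=240),
]

# Undoing a cut or reordering shots only rewrites the list:
edl.reverse()               # swap shot order
edl[0].source_out -= 25     # trim a second (at 25 fps) off the first shot

total_frames = sum(e.source_out - e.source_in for e in edl)
print(f"program length: {total_frames} frames")
```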

Compared to the linear method of tape-to-tape editing, non-linear editing offers the
flexibility of film editing, with random access and easy project organization. With edit
decision lists, the editor can work on low-resolution copies of the video. This makes it
possible to edit both standard-definition and high-definition broadcast-quality material
very quickly on normal PCs which do not have the power to do the full processing of the
huge full-quality high-resolution data in real time.

The costs of editing systems have dropped such that non-linear editing tools are now
within the reach of home users. Some editing software can now be accessed free as web
applications; some, like Cinelerra (focused on the professional market) and Blender3D,
can be downloaded free of charge; and some, like Microsoft's Windows Movie Maker or
Apple Computer's iMovie, come included with the appropriate operating system.

A computer for non-linear editing of video will usually have a video capture card to
capture analog video and/or a FireWire connection to capture digital video from a DV
camera, along with video editing software. Modern web-based editing systems can take
video directly from a camera phone over a GPRS or 3G mobile connection, and editing
can take place through a web browser interface, so strictly speaking a computer for video
editing does not require any installed hardware or software beyond a web browser and an
internet connection.

Various editing tasks can then be performed on the imported video before it is exported to
another medium, or MPEG encoded for transfer to a DVD.

History
The first truly non-linear editor, the CMX 600, was introduced in 1971 by CMX Systems,
a joint venture between CBS and Memorex. It recorded and played back black-and-white
analog video recorded in "skip-field" mode on modified disk pack drives the size of
washing machines; these were commonly used to store data digitally on mainframe
computers of the time. The 600 had a console with two monitors built in. The right
monitor, which played the preview video, was used by the editor to make cuts and edit
decisions using a light pen. The editor selected from options which were superimposed as
text over the preview video. The left monitor was used to display the edited video. A
Digital PDP-11 computer served as a controller for the whole system. Because the video
edited on the 600 was in black and white and in low-resolution "skip-field" mode, the 600
was suitable only for offline editing.

Various approximations of non-linear editing systems were built in the '80s using
computers coordinating multiple laser discs, or banks of VCRs. One example of these
tape & disc-based systems was Lucasfilm's EditDroid, which used several laserdiscs of
the same raw footage to simulate random-access editing (a compatible system for sound
post-production, called SoundDroid, was developed by Lucasfilm; it was one of the
earliest digital audio workstations).

The term "nonlinear editing" or "non-linear editing" was formalized in 1991 with the
publication of Michael Rubin's Nonlinear: A Guide to Digital Film and Video Editing
(Triad, 1991), which popularized this terminology over other language common at the
time, including "real time" editing, "random-access" or "RA" editing, "virtual" editing,
"electronic film" editing, and so on. The handbook has remained in print since 1991,
currently in its 4th edition (Triad, 2000).

Computer processing advanced sufficiently by the end of the '80s to enable true digital
imagery, and has progressed today to provide this capability in personal desktop
computers.

An example of computing power progressing to make non-linear editing possible was
demonstrated in the first all-digital non-linear editing system to be released, the "Harry"
effects compositing system manufactured by Quantel in 1985. Although it was more of a
video effects system, it had some non-linear editing capabilities. Most importantly, it
could record (and apply effects to) 80 seconds of broadcast-quality uncompressed digital
video, encoded in 8-bit CCIR 601 format, on its built-in hard disk array (the limit being
imposed by hard disk space).

Non-linear editing with computers as we know it today was first introduced by Editing
Machines Corp. in 1989 with the EMC2 editor, a hard-disk-based non-linear offline
editing system using half-screen-resolution video at 15 frames per second. A couple of
weeks later that same year, Avid introduced the Avid/1, the first in the line of its Media
Composer systems. It was based on the Apple Macintosh computer platform (Macintosh
II systems were used) with special hardware and software developed and installed by
Avid. The Avid/1 was not, however, the first system to introduce modern concepts in
non-linear editing such as timeline editing and clip bins; both were pioneered in
Lucasfilm's EditDroid in the early 1980s.

The video quality of the Avid/1 (and later Media Composer systems from the late 80s)
was somewhat low (about VHS quality), due to the use of a very early version of a
Motion JPEG (M-JPEG) codec. But it was enough to make it a very versatile system for
offline editing, to revolutionize video and film editing, and to quickly become the
dominant NLE platform.

In October 1990 NewTek introduced the Video Toaster, a hardware and software solution
for the Commodore Amiga 2000 computer system, taking advantage of the video-friendly
aspects of that system's hardware to deliver the product at an unusually low cost ($1499).
The hardware component was a full-sized card which went into the Amiga's unique
video expansion slot rather than the standard bus slots, and therefore could not be used
with the A500 and A1000 models. The card had several BNC connectors in the rear,
which accepted four video input sources and provided two outputs (preview and
program). This initial generation system was essentially a real-time four-channel video
switcher.

For the second generation, NewTek introduced the Video Toaster Flyer. The Flyer was a
far more capable non-linear editing system. In addition to processing live video signals,
the Flyer made use of hard drives to store video clips and audio, and allowed complex
scripted playback. The Flyer was capable of simultaneous dual-channel playback, which
allowed the Toaster's video switcher to perform transitions and other effects on video
clips without the need for rendering.

The hardware component was again a card, designed for the Amiga's Zorro 2 expansion
slot and primarily designed by Charles Steinkuehler. The Flyer portion of the Video
Toaster/Flyer combination was a complete computer of its own, with its own
microprocessor and embedded software, written by Marty Flickinger. Its hardware
included three embedded SCSI controllers: two of these SCSI buses were used to store
video data, and the third to store audio. The hard drives were thus connected to the Flyer
directly and used a proprietary filesystem layout, rather than being connected to the
Amiga's buses and made available as regular devices using the included DOS driver. The
Flyer used a proprietary wavelet compression algorithm known as VTASC, which was
well regarded at the time for offering better visual quality than comparable Motion JPEG
based non-linear editing systems.

Until 1993, the Avid Media Composer could only be used for editing commercials or
other small content projects, because the Apple Macintosh computers could access only
50 gigabytes of storage at one time. In 1992, this limitation was overcome by a group of
industry experts led by Rick Eye of the Digital Video R&D team at the Disney Channel.
By February 1993, this team had integrated a long-form system which gave the Avid
Media Composer on the Apple Macintosh access to over 7 terabytes of digital video data.
With instant access to the shot footage of an entire movie, long-form non-linear editing
(motion picture editing) was now possible. The system made its debut at the NAB
conference in 1993, in the booths of the three primary sub-system manufacturers: Avid,
Silicon Graphics and Sony. Within a year, thousands of these systems replaced a century
of 35mm film editing equipment in major motion picture studios and TV stations
worldwide, making Avid the undisputed leader in non-linear editing systems for over a
decade.

Although M-JPEG became the standard codec for NLE during the early 1990s, it had
drawbacks. Its high computational requirements ruled out software implementations,
leading to the extra cost and complexity of hardware compression/playback cards. More
importantly, the traditional tape workflow had involved editing from tape, often in a
rented facility; when the editor left the edit suite he could take his confidential video
tapes with him. But the M-JPEG data rate was too high for systems like Avid on the Mac
and Lightworks on the PC to store the video on removable storage, so these used fixed
hard disks instead. The tape paradigm of keeping your (confidential) content with you
was not possible with these fixed disks. Editing machines were often rented from
facilities houses on a per-hour basis, and some productions chose to delete their material
after each edit session and then recapture it the next day, in order to guarantee the
security of their content. In addition, each NLE system had storage limited by its hard
disk capacity.

These issues were addressed by a small UK company, Eidos plc (which later became
famous for its Tomb Raider video game series). Eidos chose the new ARM-based
computers from the UK and implemented an editing system, launched in Europe in 1990
at the International Broadcasting Convention. Because it implemented its own
compression software designed specifically for non-linear editing, the Eidos system had
no requirement for JPEG hardware and was cheap to produce. The software could decode
multiple video and audio streams at once for real-time effects at no extra cost. Most
significantly, for the first time, it allowed effectively unlimited quantities of cheap
removable storage. The Eidos Edit 1, Edit 2, and later Optima systems allowed the editor
to use any Eidos system, rather than being tied down to a particular one, and still keep his
data secure. The Optima software editing system was closely tied to Acorn hardware, so
when Acorn stopped manufacturing the Risc PC in the late 1990s, Eidos stopped selling
the Optima system; by this time Eidos had become predominantly a games company.
In the early 1990s a small American company called Data Translation took what it knew
about coding and decoding pictures for the US military and large corporate clients and
threw $12m into developing a desktop editor which would use its proprietary
compression algorithms and off-the-shelf parts. Their aim was to 'democratize' the
desktop and take some of Avid's market. In August 1993 the Media 100 entered the
market, and thousands of would-be editors had a low-cost, high-quality platform to use.

Inspired by the success of Media 100, members of the Premiere development team left
Adobe to start a project called "Keygrip" for Macromedia. Difficulty raising support and
money for development led the team to take their non-linear editor to NAB. After various
companies made offers, Keygrip was purchased by Apple, as Steve Jobs wanted a product
to compete with Adobe Premiere in the desktop video market. At around the same time,
Avid (now with Windows versions of its editing software) was considering abandoning
the Macintosh platform. Apple released Final Cut Pro in 1999, and despite not being
taken seriously at first by professionals, it has evolved into a serious competitor to Avid.

Another leap came in the late 1990s with the launch of DV-based video formats for
consumer and professional use. With DV came IEEE 1394 (FireWire/iLink), a simple and
inexpensive way of getting video into and out of computers. The video no longer had to
be converted from an analog signal to digital data (it was recorded as digital to start with),
and FireWire offered a straightforward way of transferring that data without the need for
additional hardware or compression. With this innovation, editing became a more
realistic proposition for standard computers with software-only packages. It enabled real
desktop editing, producing high-quality results at a fraction of the cost of other systems.

More recently, the introduction of highly compressed HD formats such as HDV has
continued this trend, making it possible to edit HD material on a standard computer
running a software-only editing application.

Avid is still considered the industry standard, with the majority of major feature films,
television programs, and commercials created with its NLE systems. Avid products were
used in the creation of every film nominated in the Best Picture, Directing, Film Editing,
Sound Editing, Sound Mixing, Visual Effects, and Animated Feature categories of the
2005 Academy Awards. Avid systems were also the overwhelming NLE choice of the
2004-2005 Primetime Emmy Award nominees, being used on more than 50 shows in
eleven major categories. Final Cut Pro continues to develop a strong following, and the
software received a Technology & Engineering Emmy Award in 2002.[2]

Avid has held on to its market-leading position, but faces growing competition from
other, cheaper software packages, notably Adobe Premiere (1992) and later Final Cut
Pro (1999). These three competing products by Avid, Adobe, and Apple are the foremost
NLEs, often referred to as the A-Team.[3]

Quality

One of the primary concerns with non-linear editing has always been picture and sound
quality. The need to compress and decompress video leads to some loss in quality. While
improvements in compression techniques and disk storage capacity have reduced these
concerns, they still exist. Most professional NLEs are able to edit uncompressed video
with the appropriate hardware.

With the more recent adoption of DV formats, quality has become an issue again: DV's
compression means that manipulation of the image can introduce significant degradation.
However, this can be partially avoided by rendering DV footage to a non-compressed
intermediary format, thereby avoiding quality loss through recompression of the modified
video images. Ultimately it depends on what changes are made to the image: simple edits
should show no degradation, while effects that alter the colour, size or position of parts of
the image will have a more negative effect.
Video server

A video server is a computer-based device (also called a 'host') dedicated to delivering
video.

Unlike PCs or Macs, both of which are multi-application devices, a video server is
designed for one purpose: provisioning video, often for broadcasters. A professional-grade
video server records, stores, and plays back multiple streams of video without any
degradation in the video signal. Broadcast-quality video servers often store hundreds of
hours of compressed audio and video (in different codecs), play out multiple synchronised
simultaneous streams of video, and offer quality interfaces such as SDI for digital video
and XLR for balanced analog audio or AES/EBU digital audio, as well as time code. A
genlock input is usually provided as a means of synchronizing with the house reference
clock, thereby avoiding the need for timebase correction and frame synchronization.

Video servers usually offer some type of control interface allowing them to be driven by
more sophisticated scheduling or play-listing applications. Popular protocols include
VDCP and the 9-Pin Protocol.

They can optionally allow recording using the same codec that is used in various editing
software packages, to prevent any time being wasted in transcoding.

Broadcast automation

In the TV broadcast industry, a server is used to store broadcast-quality images and
allows several users to edit stories simultaneously using the images it contains.

The video server can be used in a number of contexts, some of which include:

- News: providing short news video clips as part of a news broadcast, as seen on
  networks like CNN and Fox News.
- Production: enhancing live events with instant replays, slow motion and highlights
  (sports production) (see OB vans).
- Instruction: delivering course material in video format.
- Public access: delivering city-specific information to residents over a cable system.
- Surveillance: delivering real-time video images of protected sites.
- Entertainment: delivering film trailers or music videos.

Features

Typically, a video server can do the following:

- Ingest different sources: video cameras (multiple angles), satellite data feeds, disk
  drives and other video servers. This can be done in different codecs.
- Store these video feeds temporarily or permanently.
- Maintain a clear structure of all stored media, with appropriate metadata to allow
  fast searching: name, remarks, rating, date, time code, etc.
- Edit the different clips.
- Transfer those clips to other video servers or play them out directly (via an IP
  interface or SDI).

Generally, video servers have several bidirectional channels (record and ingest) for video
and audio. Perfect synchronisation between those channels is necessary to manage the
feeds.

Video Surveillance

In the surveillance context, an IP video server converts analog video signals into IP
video streams. The IP video server can stream digitized video over IP networks in the
same way that an IP camera can. Because an IP video server uses IP protocols, it can
stream video over any network that IP can use, including via a modem for access over a
phone or ISDN connection. With the use of a video server attached to an analog camera,
the video from an existing surveillance system can be converted and networked into a
new IP surveillance system.

In the video security industry, a video server is a device to which one or more video
sources can be attached. Video servers are used to give existing analog systems network
connectivity; they are essentially transmission/telemetry/monitoring devices. Viewing is
done using a web browser or, in some cases, supplied software. These products also
allow the upload of images to the internet or direct viewing from the internet; in order to
upload to the internet, an account with an ISP (internet service provider) may be
required. Manufacturers of video servers [1] include well-known names in the global
security industry such as Sony, Honeywell Security, Bosch Security, Axis
Communications and Videor Technical.

Time code

A time code is a sequence of numeric codes generated at regular intervals by a timing
system. Time codes are used extensively for synchronization and for logging material in
recorded media. SOM is a related term in the broadcast industry and stands for 'Start of
Message' or 'Start of Media', also known as Time Code (TC) in. Similarly, EOM
stands for 'End of Message' or 'End of Media', also known as Time Code (TC) out.
Vectorscope

A video vectorscope displaying color bars. The diagonal direction of the color burst
vector is indicative of a PAL signal.

A vectorscope is a special type of oscilloscope used in both audio and video applications.
Whereas an oscilloscope or waveform monitor normally displays a plot of signal vs. time,
a vectorscope displays an X-Y plot of two signals, which can reveal details about the
relationship between these two signals. Vectorscopes are highly similar in operation to
oscilloscopes operated in X-Y mode; however, those used in video applications have
specialized graticules, and accept standard television or video signals as input
(demodulating and demultiplexing the two components to be analyzed internally).

Video Applications

In video applications, a vectorscope supplements a waveform monitor for the purpose of
measuring and testing television signals, regardless of format (NTSC, PAL, SECAM or
any number of digital television standards). While a waveform monitor allows a
broadcast technician to measure the overall characteristics of a video signal, a
vectorscope is used to visualize chrominance, which is encoded into the video signal as a
subcarrier of specific frequency. The vectorscope locks exclusively to the chrominance
subcarrier in the video signal (at 3.58 MHz for NTSC, or at 4.43 MHz for PAL) to drive
its display. In digital applications, a vectorscope instead plots the Cb and Cr channels
against each other (these are the two channels in digital formats which contain chroma
information).

A vectorscope uses an overlaid circular reference display, or graticule, for visualizing
chrominance signals, this being the most natural way of referring to the QAM scheme
used to encode color into a video signal. The actual visual pattern that the incoming
chrominance signal draws on the vectorscope is called the trace. Chrominance is
measured along two dimensions: color saturation, encoded as the amplitude, or gain, of
the subcarrier signal, and hue, encoded as the subcarrier's phase. The vectorscope's
graticule roughly represents saturation as distance from the center of the circle, and hue
as the angle, in standard position, around it.

The graticule is also embellished with several elements corresponding to the various
components of the standard color bars video test signal, including boxes around the circle
for the colors in the main bars, and perpendicular lines corresponding to the U and V
components of the chrominance signal (and, additionally on an NTSC vectorscope, the I
and Q components). NTSC vectorscopes have one set of boxes for the color bars, while
their PAL counterparts have two sets of boxes, because the R-Y chrominance component
in PAL reverses in phase on alternating lines. Another element in the graticule is a fine
grid at the nine o'clock, or -U, position, used for measuring differential gain and phase.

Often two sets of bar targets are provided: one for color bars at 75% amplitude and one
for color bars at 100% amplitude. The 100% bars represent the maximum amplitude (of
the composite signal) that composite encoding allows for; 100% bars are not suitable for
broadcast and are not broadcast safe. 75% bars have reduced amplitude and are broadcast
safe.

In some vectorscope models, only one set of bar targets is provided. The vectorscope can
be set up for 75% or 100% bars by adjusting the gain so that the color burst vector
extends to the "75%" or "100%" marking on the graticule.

The reference signal used for the vectorscope's display is the color burst that is
transmitted before each line of video, which for NTSC is defined to have a phase of 180°,
corresponding to the nine o'clock position on the graticule. The actual color burst signal
shows up on the vectorscope as a straight line pointing to the left from the center of the
graticule. In the case of PAL, the color burst phase alternates between 135° and 225°,
resulting in two vectors pointing in the half-past-ten and half-past-seven positions on the
graticule, respectively. In digital (and component analog) vectorscopes, colorburst doesn't
exist; hence the phase relationship between the colorburst signal and the chroma
subcarrier is simply not an issue. A vectorscope for SECAM uses a demodulator similar
to the one found in a SECAM receiver to retrieve the U and V colour signals, since they
are transmitted one at a time (Thomson 8300 Vecamscope).
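To make the relationship between a color and its vectorscope reading concrete, here is a small numerical sketch (Python). It uses the common luma weights but ignores the U/V scaling factors that real encoders apply, so it is illustrative only:

```python
import math

def vector_for_rgb(r: float, g: float, b: float):
    """Return the (B-Y, R-Y) chrominance components of an RGB color and
    the vectorscope reading: saturation as magnitude, hue as angle."""
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance
    b_y = b - y                               # horizontal axis of the plot
    r_y = r - y                               # vertical axis of the plot
    saturation = math.hypot(b_y, r_y)         # distance from center
    hue = math.degrees(math.atan2(r_y, b_y))  # angle on the graticule
    return b_y, r_y, saturation, hue

# A fully saturated red bar lands far from center; a neutral gray has no
# chrominance at all and collapses to the center dot.
print(vector_for_rgb(1.0, 0.0, 0.0))   # red: large magnitude
print(vector_for_rgb(0.7, 0.7, 0.7))   # gray: (0, 0) -> center
```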
The graticule of an NTSC vectorscope.

On older vectorscopes implemented with CRTs, the graticule was often implemented as a
silkscreened overlay which was superimposed over the front surface of the CRT. One
notable exception was the Tektronix WFM601 series of instruments, combined waveform
monitors/vectorscopes used to measure CCIR 601 television signals. The waveform-mode
graticules of these instruments are implemented with a silkscreen, whereas the
vectorscope graticule (consisting only of bar targets, as this family did not support
composite video) was drawn on the CRT by the electron beam. Modern instruments have
graticules drawn using computer graphics, with both graticule and trace rendered on an
external VGA monitor or an internal VGA-compatible LCD display.

Most modern waveform monitors include vectorscope functionality built in, and many
allow the two modes to be displayed side by side. The combined device is typically
referred to as a waveform monitor, and standalone vectorscopes are rapidly becoming
obsolete.

Audio Applications

In audio applications, a vectorscope is used to measure the difference between the
channels of a stereo audio signal. One stereo channel drives the horizontal deflection of
the display, and the other drives the vertical deflection. A monaural signal, consisting of
identical left and right signals, results in a straight line with a slope of positive one. Any
stereo separation is visible as a deviation from this line, creating a Lissajous figure. If the
straight line (or deviation from it) appears with a slope of negative one, this indicates that
the left and right channels are 180° out of phase.
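A quick numerical sketch of this behavior (Python; the test tone is made up): identical channels give points on the line y = x, and inverting one channel flips them onto y = -x.

```python
import math

N = 8
tone = [math.sin(2 * math.pi * t / N) for t in range(N)]

mono = list(zip(tone, tone))         # left == right
inverted = [(l, -l) for l in tone]   # right is 180 degrees out of phase

# Each (x, y) pair is one point of the Lissajous figure:
# mono points satisfy y == x (slope +1), inverted points y == -x (slope -1).
assert all(abs(y - x) < 1e-9 for x, y in mono)
assert all(abs(y + x) < 1e-9 for x, y in inverted)
print("mono -> slope +1, phase-inverted -> slope -1")
```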
Using a Vectorscope

Step 1

Know that there are two parts to a vectorscope: the scale, or graticule, against which you
do your measurement, and the trace, which represents the chrominance, or color, portion
of the video signal. The vectorscope's display is created by decoding the chrominance
portion of the video signal into two components, B-Y (blue minus luminance) and R-Y
(red minus luminance). These two signals are then plotted horizontally and vertically on
the scale: B-Y is on the horizontal axis and R-Y is on the vertical axis. Colors are
displayed as vectors between them, hence the name of the instrument.

Step 2

Calibrate your vectorscope before using it. The point in the center of the scale is the
reference for centering the trace. In REF mode, you'll generate a circle with your trace
that you'll match to the circle of your display. The NTSC color subcarrier, known as
burst, should be rotated until it rests on the 9 o'clock line. Burst is the lime green
3.58 MHz color reference signal that is sent out with every line of video. If you put a
professional monitor in underscan, you'll see it as a green stripe on the left side of your
picture, to the right of the blacker-than-black sync pulse.

Step 3

Determine the relative strength of each color once your signal is calibrated. The center is
reserved for black or white. The relative signal strength of each color can be readily read
off as its distance from the center.

Step 4

Turn on your SMPTE color bars. With a clean signal source, the output of bars should
rest within the center of all the labeled boxes. If your source is on tape, you'll detect a
little jitter and see a blurred image. To match the phase, or tint, of two or more different
cameras, switch each source to bars, and on the program output of your switcher adjust
the phase pots on the camera control units until all of the bars land in their appropriate
boxes.

Step 5

Check that the camera white balances properly. Choose the correct filter. Light with the
"white" light that you'll use. Open the iris up until the waveform monitor reaches
100 IRE. Activate white balance and see if all the vectors retreat into the center. Most
cameras have auto black balance; cap the lens and the same thing should happen. If the
camera has a color shift, check that the filter matches the light source. If the signal
doesn't shrink down to the center, your camera might need repair.

Step 6

Protect against oversaturation. Video equipment is designed to work within broadcast
specs, but as the line between computers and broadcast video gear blurs, it's easy to
create graphics that simply have too much color for transmitters and televisions to
handle. In some cases the oversaturation is easy to see: the color will extend outside the
vectorscope's circle. Strictly speaking, any part of the picture with over 75% saturation is
illegal for NTSC. On your vectorscope those target boxes represent the color limit.

Step 7

Know the legal exceptions. Sometimes a highly saturated color can slip through. If you
find a color that goes past 75%, turn to the waveform monitor in the flat, or luminance-plus-color,
position. If your color plus luminance together is under 121 IRE, you should be
able to get by with it.

Basic Video Testing -- Vectorscope Techniques

In the section on waveform monitor techniques, it was explained that a waveform
monitor displays a graph of the video signal -- amplitude (voltage) on the vertical axis
and time on the horizontal axis. A vectorscope also graphs portions of the video signal for
test and measurement purposes, but the vectorscope display differs from a waveform
display.

While a waveform monitor displays amplitude information about all parts of the signal, a
vectorscope displays information about only the chrominance (coloring) portion of the
video signal -- it does not respond to other parts of the video signal.

There are two important parameters of the chrominance signal that may suffer distortions
leading to noticeable picture problems. These are amplitude (gain) and phase (timing).
Amplitude is an independent measurement and can actually be made with a waveform
monitor. Phase is the relationship between two signals -- in this case, the relationship
between the chrominance signal and the reference burst on the video. The processing
within a vectorscope and the display of the processed signals is designed to readily detect
and evaluate both phase and gain distortions of the chrominance.

Understanding the Display

There are two parts to a vectorscope display -- the graticule and the trace. The graticule is
a scale that is used to quantify the parameters of the signal under examination. Graticules
may be screened either onto the faceplate of the CRT itself (internal graticule) or onto a
piece of glass or plastic that fits in front of the CRT (external graticule). They can also be
electronically generated. The trace represents the video signal itself and is electronically
generated from the demodulated chrominance signals (Figure 3-1).

Figure 3-1a. The vectorscope screen shows a display of a SMPTE color bars test signal
with all of the dots in their boxes and the color burst correctly placed on the horizontal
axis.

Figure 3-1b. The vector dots are rotated with respect to their boxes, indicating a chroma
phase error.

All vectorscope graticules are designed to work with a color bars signal. Remember, the
color bars signal consists of brightness information (luminance) and high-frequency color
information (chrominance or chroma).

Each bar of the color bars signal creates a dot on the vectorscope's display. The position
of these dots relative to the boxes, or targets, on the graticule and the phase of the burst
vector are the major indicators of the chrominance (color) signal's health. (Burst is a
reference packet of subcarrier sine waves that is sent on every line of video.)

The graticule

The graticule is usually a full circle, with markings in 2- and 10-degree increments. The
cross point in the center is the reference mark for centering the trace. Within the circle
you also see six target shapes, each containing smaller, sectioned shapes. The smaller
shapes are where each dot of the color bars signal should fall if the chroma gain and
phase relationships are correct. (In practice, a dot that falls entirely within the smaller
target is only created by a very low-noise signal, such as one directly from a color bars
generator. Expect the dots to be much larger and "fuzzier" on signals from tape players or
off-air receivers. Note also the camera signal in Figure 3-2.)

The horizontal line bisecting the circle at zero and 180 degrees (3 o'clock and 9 o'clock
positions, respectively) is used as a reference for correctly positioning the burst display.
(Burst is indicated by the portion of the trace that extends from the center part way
toward the 180 degree position in Figure 3-2.) The markings toward the left of the burst
display indicate correct burst amplitude for 75% or 100% color bars. Note that burst does
not change electrical amplitude -- the gain in the vectorscope's processing is increased
when it is set to display 75% amplitude bars in the targets -- and that increased gain
causes the display of the fixed-amplitude burst signal to be longer.

At the left edge and near the outer circumference of the graticule is a grid used to
measure differential gain and phase. These measurements are discussed in the later
section on Intermediate Video Testing.

Some vectorscopes also have square boxes outside the graticule's circle in the lower-left
and upper-right corners of the screen. These boxes, along with the vertical line that
bisects the circle, are used for making stereo audio gain and phase measurements with a
Lissajous display. While this capability is useful in certain applications, we will not
discuss it in this booklet, but rather concentrate on video measurements.

The trace

The vectorscope's display is created by decoding the chrominance portion of the video
signal into its two components, B-Y (blue minus luminance) and R-Y (red minus
luminance). These two signals are then plotted against each other in an X-Y fashion, with
B-Y on the horizontal axis and R-Y on the vertical axis. While waveform monitors lock
to sync pulses in the video signal, vectorscopes lock to the 3.58 MHz color burst and use
it as the phase reference for the display.

Three color bars signals are commonly used with vectorscopes -- 75% and 100% full
field color bars and the SMPTE color bars. The 100% and 75% labels refer to the
amplitude of the signal, not the saturation of the colors; both are 100% saturated. In this
application note, we will use the SMPTE color bars signal exclusively, because it is an
industry standard. SMPTE bars are 75% amplitude color bars.

You should always use 75% bars for basic testing. This is because 100% bars contain
signal levels too high to pass through some types of equipment undistorted, even when
the equipment is operating properly. For more information on the various color bars
signals, refer to the Basic Video Testing -- Waveform Monitor Techniques section.

Using the Vectorscope Controls

The first rule of thumb to remember when you begin to operate the vectorscope is that
the controls do not in any way affect the video signal itself. The only way to adjust the
signal, rather than just its vectorscope display, is with the controls of the equipment
providing or transferring the video signal.

All the basic controls for powering up the instrument and adjusting the display for
comfortable viewing (power and display: focus, scale illumination, and intensity) are
self-explanatory. The gain controls provide a way to calibrate the display's gain for either
75% or 100% color bars. A variable control expands the trace to compensate for low
signal levels, or so you can analyze the signal in greater detail. For white balancing
cameras, use of this control also allows more visibility of the detail in the white and black
information near the center of the vectorscope display.

The input controls typically consist of the input channel, reference, and mode selections.
The input for channel A, for example, is likely to be video from a switcher's output (with
a VTR, camera, signal generator or other equipment providing the input to the switcher).
Reference may be internal, where the vectorscope locks to the color burst of the selected
input channel, or external, where lock is to a second signal. In this section we are only
using internal reference. Externally referencing a vectorscope is imperative for the
purpose of matching the chroma phase of various sources to eliminate color shifts at
edit points. This application is discussed in the Setting Up a Genlocked Studio section.

Internally referenced displays only show the phase relationship between the burst and
color bars signal of the selected input signal itself -- not phase relationships between
different signals.

If the vectorscope has an external X,Y input for displaying audio, a mode control allows
you to enable this input for display with or without the regular vector display.

The phase control is one you will use often. In the internally referenced mode it must be
used to align the burst vector on the horizontal axis (pointing toward zero degrees, or 9
o'clock). If the color burst is not in this correct position, the signal is not properly aligned
with the graticule and provides no really useful information. As you rotate the phase
control, the whole trace rotates about the center point.

Checking the Display

When you first check the display of the color bars signal on a vectorscope, you should
see:

1. A bright dot in the center of the display.
2. Color burst straight out on the horizontal axis, extending to the 75% mark.

There are also a few things you should examine on the waveform monitor. The waveform
monitor should indicate the following:

1. Waveform black level at 7.5 IRE. (Black level adjustments are made with the setup
   control on the equipment under test while viewing the waveform monitor.)
2. Waveform luminance (white bar) at 100 IRE. (Amplitude adjustments are made with
   the video level, or video gain, control on the equipment under test while viewing the
   waveform monitor.) Refer to the Waveform Monitor Techniques section, "A word of
   caution", if there is more than one video gain or level control in the path you're testing.

Using the test signal generator's color bars as a reference signal, you should check to
make certain the signals appear on the waveform monitor and vectorscope as mentioned
above. Again, use the phase control to ensure the color burst vector is properly positioned
and the vector color dots are in their graticule boxes. With this reference position
established, any differences in other signals you select will be obvious.

Checking Chrominance Phase

Now that you've established a reference position on the vectorscope, the next step for
checking the signal from the studio VTR equipment against the reference signal is to play
back a videotape with the 75% color bars recorded. Select the equipment under test on
the video switcher (VTR, TBC or proc amp), or by appropriately connecting cables, and
look at the vectorscope display.

If the burst phase vector lines up on the horizontal axis, but the dot pattern is rotated with
respect to the boxes, there is a chroma phase problem with the equipment under test
(Figure 3-1b).

This rotation of the dot pattern means that the chrominance phase is incorrect relative to
the color burst. This phasing error causes hues to be wrong -- people's faces appear green
or purple, for example.

Correct chrominance phase is critical, even if hue errors aren't obvious on a picture
monitor. To correct a chrominance phase error, perform the following steps:

1. Ensure the vectorscope's burst display is aligned with the horizontal axis (phase
   control on the vectorscope).
2. Adjust the hue control or chroma phase control on the equipment in the path to get
   the dots as close to their respective graticule boxes as possible. They may be too near
   the center of the display or too far away, but they will lie in the proper direction from
   the center when phase is correct.

If you are working with a camera in your system, the adjustments are no different. You
just have to look at the camera's output, or at the output of the camera control unit (CCU)
if you are using one, when you make the adjustments.

It is often best to go back and forth between the chrominance gain (discussed next) and
phase adjustments to get the dots in their graticule boxes. The dots will rarely be exactly
centered on the cross points in the smaller boxes. It is, however, acceptable if they fall
somewhere within the small boxes rather than the larger boxes.

Remember, these tests are only checking the bar signal relative to the color burst. Not all
equipment requires or makes available this adjustment. Phasing (or timing) several
sources or paths so their signals may be switched or mixed requires an external reference
for the vectorscope and somewhat different techniques. These are discussed in the
Setting Up a Genlocked Studio section.

Checking Chrominance Gain (Amplitude)

Adjusting chroma phase alone may not put the dots in their boxes, because they may fall
short of, or extend too far outside, the boxes (Figure 3-3). This indicates a chroma gain
error.

Figure 3-3. This display shows the vector dots extending beyond their boxes, indicating a
chroma gain error.

For checking chroma gain on a VTR, you will again use the color bars signal recorded at
the head of the tape -- other equipment may require a bars signal as its input. Again, you
should check luminance levels on the waveform monitor, and correct any errors, before
you make any measurements with the vectorscope. (Remember, the vectorscope does not
display any information about luminance.)

1. Use the setup control (if provided) to position the black level at 7.5 IRE.
2. Use the video level controls on the equipment under test to adjust the white level on
   the waveform monitor to 100 IRE.

While the tape is playing the recorded bars, dots will appear on the vectorscope. If the
dots are beyond the boxes, the chroma amplitude is too high. If the dots fall short of the
boxes, chrominance is too low.

3. Adjust the chroma gain control of the equipment under test until the dots fall into
   their boxes.

If adjusting the chroma gain control doesn't get the dots into their boxes, you may need to
re-adjust hue or have the equipment serviced.

If you happen to run the tape past the color bars signal and into active video, you'll notice
that the vector display looks very different. This is because there are typically no large
areas of primary or secondary colors in the picture to create the bar dots. The display
looks fuzzy or blurred because the trace shows the blend of hues within the picture.

Checking White Balance

Along with checking for chroma phase and gain, there is another studio application that
uses a vectorscope in the internal reference mode -- checking the white balance of video
cameras.

Figure 3-2. Proper white balance of a video camera is indicated by a fuzzy spot centered
on the vectorscope display when the camera is viewing a white object.

White balance is the process of balancing the camera's red, green and blue channels.
When these channels are properly balanced, the camera's output signals will reproduce
whites in a scene without adding color. In order for the camera to reproduce colors
correctly, it must be able to reproduce white accurately. The signals from the green, blue,
and red sensing elements in the camera must be properly balanced to ensure there is no
chrominance signal at the camera output when it is viewing white.

Before you can set white balance, you must first check the camera for proper black
balance:

1. Connect the video camera signal to the vectorscope input, or switch to the camera on
   the video switcher.
2. Cap the camera's lens. If the black balance is correct, the camera's signal will produce
   a fuzzy spot in the center of the vectorscope display, the same as for proper white
   balance (Figure 3-2). If the spot is distended away from the center in any direction,
   proceed as follows:
3. Adjust the black balance control on the camera until the spot returns to the center of
   the display.

To check white balance, uncap the camera and point it at a pure white target. If white
balance is correct, the resulting signal from the camera should again produce a fuzzy spot
in the center of the vectorscope display (Figure 3-2). If any color channel in the camera is
out of balance, the fuzzy spot on the vectorscope will be distended or moved toward the
corresponding color's graticule box. For example, too much red signal moves the spot
toward the red box.

If the vectorscope display indicates incorrect color balance, it can be corrected by using
the camera's white balance controls. Cameras that allow manual white balance
adjustment usually have two controls (red gain and blue gain). It may help to remember
that the red balance control will move the spot vertically and the blue control will move it
horizontally. To manually set white balance, the following steps are recommended:

1. Aim the camera at a white reference. (Before attempting to correct a white balance
   problem, make sure you have the correct filter selected on the camera for the type of
   lighting in use. The wrong filter selection will cause an apparent white balance
   problem on the vectorscope.)
2. Adjust the red and blue gain controls on the camera to get the fuzzy spot in the center
   of the graticule on the vectorscope.

Some cameras with an automatic white balance feature may not offer manual adjustment
and will have to be referred to a service technician if the automatic balancing feature does
not achieve the desired results.

Conclusion

This section has dealt with three basic uses of a vectorscope -- evaluation of
chrominance-to-burst phase, chrominance gain, and color balancing of a camera. There
are other, more extensive, measurements that you can make with a simple vectorscope,
and others that require instruments with enhanced capabilities. Several of these more
advanced techniques are addressed in the following sections: Setting Up a Genlocked
Studio and Intermediate Video Testing.

DV

Digital Video (DV) is a digital video format created by Sony, JVC, Panasonic and other
video camera producers, and launched in 1995. Its smaller tape form factor, MiniDV, has
since become a standard for home and semi-professional video production; it is
sometimes used for professional purposes as well, such as filmmaking and electronic
news gathering (ENG). The DV specification (originally known as the Blue Book, current
official name IEC 61834) defines both the codec and the tape format. Features include
intraframe compression for uncomplicated editing and good video quality, especially
compared to earlier consumer analog formats such as Video8, Hi8 and VHS-C. DV now
enables filmmakers to produce movies inexpensively, and is strongly associated with
independent film and citizen journalism[citation needed].
The high quality of DV images, especially when compared to Video8 and Hi8, which
were vulnerable to an unacceptable amount of video dropouts and "hits", prompted the
acceptance by mainstream broadcasters of material shot on DV. The low cost of DV
equipment and its ease of use put such cameras in the hands of a new breed of
videojournalists. Programs such as TLC's Trauma: Life in the E.R. and ABC News'
Hopkins: 24/7 were shot on DV. CNN's Anderson Cooper is perhaps the best known of
the generation of reporter/videographers who began their professional careers shooting
their own stories.

There have been some variants on the DV standard, most notably Sony's DVCAM and
Panasonic's DVCPRO formats, targeted at professional use. Sony's consumer Digital8
format is another variant, which is similar to DV but recorded on Hi8 tape. Other formats
such as DVCPRO50 utilize DV25 encoders running in parallel.

MiniDV tapes can also be used to record a high-definition format called HDV in cameras The chroma subsampling is 4:1:1 for NTSC, 4:1:1 for DVCPRO PAL, and 4:2:0 for other
designed for this codec, which differs significantly from DV on a technical level as it PAL. Not all analog formats are outperformed by DV. The Betacam SP format, for
uses MPEG-2 compression. MPEG-2 is more efficient than the compression used in DV, example, can still be desirable because it has slightly greater chroma fidelity and no
in large part due to inter-frame/temporal compression. [1] This allows for higher resolution digital artifacts.
at bitrates similar to DV. On the other hand, the use of inter-frame compression can cause
motion artifacts and complications in editing.[2] Nonetheless, HDV is being widely
adopted for both consumer and professional purposes and is supported by many editing Low chroma resolution is a reason why DV is sometimes avoided in applications where
applications using either the native HDV format or intermediary editing codecs.[3] chroma-key will be used. Nevertheless, advances in keying software (i.e. the combination
of chroma keying with different keying techniques,[5] chroma interpolation[6]) allows for
reasonable quality keys from DV material.
[edit] Technical standards

[edit] Audio
[edit] Before compression

DV allows either 2 digital audio channels (usually stereo) at 16-bit resolution and 48 kHz
To avoid aliasing, optical low pass filtering is necessary (although not necessarily sampling rate, or 4 digital audio channels at 12-bit resolution and 32 kHz sampling rate.
implemented in all camera designs). Essentially, blurry glass is used to add a small blur to For professional or broadcast applications, 48 kHz is used almost exclusively. In addition,
the image. This prevents high-frequency information from getting through and causing the DV spec includes the ability to record audio at 44.1 kHz (the same sampling rate used
aliasing. Low-quality lenses can also be considered a form of optical low pass filtering. for CD audio), although in practice this option is rarely used. DVCAM and DVCPRO
both use locked audio while standard DV does not. This means that at any one point on a
DV tape the audio may be +/- ⅓ frame out of sync with the video. However, this is the
Before arriving at the codec compression stage, light energy hitting the sensor is
maximum drift of the audio/video sync; it is not compounded throughout the recording.
transduced into analog electrical signals. These signals are then converted into digital
In DVCAM and DVCPRO recordings the audio sync is permanently linked to the video
signal by an analog to digital converter (ADC or A/D). This signal is then processed by a
sync.
digital signal processor (DSP) or custom ASIC and undergoes the following processes:

[edit] Connectivity
Processing of raw input into (linear) RGB signals: For Bayer pattern-based sensors
(i.e. sensors utilizing a single CCD or CMOS and color filters), the raw input has to be
demosaiced. For Foveon-based designs, the signal has to be processed to remove cross- The FireWire (aka IEEE 1394) serial data transfer bus is not a part of the DV
talk between the color layers. In pixel-shifted 3CCD designs, a process somewhat similar specification, but co-evolved with it. Nearly all DV cameras have an IEEE 1394 interface
to de-mosaicing is applied to extract full resolution from the sensor. and analog composite video and Y/C outputs. High end DV VTRs may have additional
professional outputs such as SDI, SDTI or analog component video. All DV variants have
a timecode, but some older or consumer computer applications fail to take advantage of
Matrix (for colorimetry purposes): The digital values are matrixed to tweak the camera's
it. Some camcorders also feature a USB2 port for computer connection, but these are
color response to improve color accuracy and to make the values appropriate for the
sometimes not capable of capturing the DV stream in full detail, and are instead used
target colorimetry (SMPTE C, Rec. 709, or EBU phosphor chromaticities). For
primarily for transferring certain digital data from the camcorder such as still pictures and
performance reasons, this matrix may be applied after gamma correction and combined
computer-format video files (such as MPEG4-encoded video). This carries Audio Video
with the matrix that converts R'G'B' to Y'CbCr.
and Control Signals.

Electronic white balance may be applied in this matrix, or in the matrix operation
On computers, DV streams are usually stored in container formats such as MOV, MXF,
applied after gamma correction.
AVI or Matroska.

Gamma correction: Gamma correction is applied to the linear digital signal, following
[edit] Physical format
the Rec. 601 transfer function (a power function of 1/0.45). Note that this increases the
quantization error from before.
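A minimal sketch of that step, assuming the 0.45 power law is paired with the standard linear segment near black (the text names only the power, so the toe constants here are an assumption):

    # Linear light in, gamma-corrected code value out, both in the 0..1 range.
    def gamma_correct(v: float) -> float:
        if v < 0.018:                        # linear toe near black (assumed)
            return 4.5 * v
        return 1.099 * v ** 0.45 - 0.099     # encode with exponent 0.45

    for v in (0.0, 0.01, 0.18, 0.5, 1.0):
        print(f"{v:.2f} -> {gamma_correct(v):.4f}")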

Matrix (R'G'B' to Y'CbCr conversion): This matrix converts the gamma-corrected R'G'B' values to Y'CbCr color space. Y'CbCr color space facilitates chroma subsampling. This operation introduces yet more quantization error, in large part due to a difference in the scale factors between the Y' and Cb and Cr components.
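A minimal sketch of the conversion, assuming Rec. 601 coefficients and 8-bit studio-range quantization (the article does not pin down the exact variant). The differing scale factors on the luma and chroma rows are the ones referred to above:

    def ycbcr_8bit(r, g, b):
        """Gamma-corrected R'G'B' in 0..1 -> 8-bit studio-range Y'CbCr."""
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.169 * r - 0.331 * g + 0.500 * b
        cr =  0.500 * r - 0.419 * g - 0.081 * b
        # Luma is scaled onto 219 steps, chroma onto 224: rounding each to
        # integers adds the extra quantization error mentioned above.
        return round(16 + 219 * y), round(128 + 224 * cb), round(128 + 224 * cr)

    print(ycbcr_8bit(1.0, 1.0, 1.0))   # (235, 128, 128): reference white
    print(ycbcr_8bit(0.0, 0.0, 0.0))   # (16, 128, 128): reference black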

Chroma subsampling: Since human vision has greater acuity for luminance than for color, performance can be optimized by devoting greater bandwidth to luminance than to color. Chroma subsampling exploits this in the Y'CbCr color space: the Cb and Cr color difference components are stored at a lower resolution than the Y' (luma) component.
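The sketch below illustrates the idea with the two layouts that matter for DV, 4:1:1 and 4:2:0 (see the Chroma subsampling section below); plain averaging stands in for whatever decimation filter a real encoder uses, which is an assumption here:

    def subsample_411(cb_row):
        """4:1:1: keep one chroma sample per four luma samples along a row."""
        return [sum(cb_row[i:i + 4]) / len(cb_row[i:i + 4])
                for i in range(0, len(cb_row), 4)]

    def subsample_420(plane):
        """4:2:0: halve chroma resolution both horizontally and vertically."""
        return [[(plane[y][x] + plane[y][x + 1] +
                  plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
                 for x in range(0, len(plane[0]) - 1, 2)]
                for y in range(0, len(plane) - 1, 2)]

    print(subsample_411([8, 8, 8, 8, 0, 0, 0, 0]))   # [8.0, 0.0]
    print(subsample_420([[4, 4], [0, 0]]))           # [[2.0]]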
Sharpening is often used to counteract the effect of optical low pass filtering. Sharpening can be implemented via finite impulse response filters.

The data is now compressed using one of several algorithms, including discrete cosine transform (DCT), adaptive quantization (AQ), and variable-length coding (VLC).

Video compression

DV uses DCT intraframe compression at a fixed bitrate of 25 megabits per second (25.146 Mbit/s), which, when added to the sound data (1.536 Mbit/s), the subcode data, error detection, and error correction (approx 8.7 Mbit/s), amounts in all to roughly 36 megabits per second (approx 35.382 Mbit/s). At equal bitrates, DV performs somewhat better than the older MJPEG codec, and is comparable to intraframe MPEG-2.[4] DCT compression is lossy, and sometimes suffers from artifacting around small or complex objects such as text. The DCT compression has been specially adapted for storage onto tape. The image is divided into macroblocks, each consisting of 4 luminance DCT blocks and 1 chrominance DCT block. Furthermore, 6 macroblocks, selected at positions far away from each other in the image, are coded into a fixed amount of bits. Finally, the information of each compressed macroblock is stored as much as possible in one sync-block on tape. All this makes it possible to search video on tape at high speeds, both forward and reverse, as well as to correct faulty sync blocks.
Chroma subsampling

The chroma subsampling is 4:1:1 for NTSC, 4:1:1 for DVCPRO PAL, and 4:2:0 for other PAL variants. Not all analog formats are outperformed by DV. The Betacam SP format, for example, can still be desirable because it has slightly greater chroma fidelity and no digital artifacts.

Low chroma resolution is a reason why DV is sometimes avoided in applications where chroma-key will be used. Nevertheless, advances in keying software (i.e. the combination of chroma keying with different keying techniques,[5] chroma interpolation[6]) allow for reasonable quality keys from DV material.

Audio

DV allows either 2 digital audio channels (usually stereo) at 16-bit resolution and 48 kHz sampling rate, or 4 digital audio channels at 12-bit resolution and 32 kHz sampling rate. For professional or broadcast applications, 48 kHz is used almost exclusively. In addition, the DV spec includes the ability to record audio at 44.1 kHz (the same sampling rate used for CD audio), although in practice this option is rarely used. DVCAM and DVCPRO both use locked audio while standard DV does not. This means that at any one point on a DV tape the audio may be +/- ⅓ frame out of sync with the video. However, this is the maximum drift of the audio/video sync; it is not compounded throughout the recording. In DVCAM and DVCPRO recordings the audio sync is permanently linked to the video sync.
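For a sense of scale, one third of a frame is on the order of ten milliseconds:

    for name, fps in (("NTSC", 30000 / 1001), ("PAL", 25.0)):
        print(f"{name}: +/- {1000 / fps / 3:.1f} ms")   # ~11.1 ms and ~13.3 ms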
Connectivity

The FireWire (aka IEEE 1394) serial data transfer bus is not a part of the DV specification, but co-evolved with it. FireWire carries audio, video and control signals over a single cable. Nearly all DV cameras have an IEEE 1394 interface and analog composite video and Y/C outputs. High end DV VTRs may have additional professional outputs such as SDI, SDTI or analog component video. All DV variants have a timecode, but some older or consumer computer applications fail to take advantage of it. Some camcorders also feature a USB2 port for computer connection, but these are sometimes not capable of capturing the DV stream in full detail, and are instead used primarily for transferring certain digital data from the camcorder, such as still pictures and computer-format video files (such as MPEG4-encoded video).

On computers, DV streams are usually stored in container formats such as MOV, MXF, AVI or Matroska.

Physical format

DV cassettes
Left to right: DVCAM-L, DVCPRO-M, MiniDV

The DV format uses "L-size" cassettes, while MiniDV cassettes are called "S-size". Both MiniDV and DV tapes can come with a low capacity embedded memory chip (MIC) (most commonly, a scant 4 Kbit for MiniDV cassettes, but the system supports up to 16 Kbit). This embedded memory can be used to quickly sample stills from edit points (for example, each time the record button on the camcorder is pressed when filming, a new "scene" timecode is entered into memory). DVCPRO has no "S-size", but an additional "M-size" as well as an "XL-size" for use with DVCPRO HD VTRs. All DV variants use a tape that is ¼ inch (6.35 mm) wide.
MiniDV

A size comparison between video formats
Top to bottom: VHS, VHS-C, MiniDV

Both cassette sizes are commonly referred to as just DV[7]. The "L" cassette is about 120 × 90 × 12 mm and can record up to 4.6 hours of video (6.9 hours in EP/LP). The better known MiniDV "S" cassettes measure 65 × 48 × 12 mm and hold either 60 or 90 minutes of video (13 or 19.5 GB[citation needed]) depending on whether the video is recorded at Standard Play (SP) or Extended Play (sometimes called Long Play) (EP/LP). 80 minute tapes that use thinner tape are also available and can record 120 minutes of video in EP/LP mode. The tapes sell for as little as US$2.50 each in quantity as of 2009. DV on SP has a helical scan track width of 10 micrometres, while EP uses a track width of only 6.7 micrometres. Since the tolerances are much tighter, the recorded tape may not play back properly or at all on other devices.

Canon ZR850 with three Sony MiniDV tapes

A disassembled MiniDV cassette.

Software is currently available for ordinary home computers which allows users to record any sort of computer data on MiniDV cassettes using common DV decks or camcorders.

Though originally intended for the consumer market as a high-quality replacement for VHS, L-size DV cassettes are largely nonexistent in the consumer market, and are generally used only in professional settings. Even in professional markets, most DV camcorders support only MiniDV, though many professional DV VTRs support both sizes of tape.[citation needed]

DVCAM

Sony's DVCAM is a professional variant of the DV standard that uses the same cassettes as DV and MiniDV, but transports the tape 50% faster, giving a 50% wider track of 15 micrometres instead of 10. This variant uses the same codec as regular DV; however, the wider track lowers the chances of dropout errors. The LP mode of consumer DV is not supported. All DVCAM recorders and cameras can play back DV material, but DVCPRO support was only recently added to some models, such as the DSR-1800, DSR-2000 and DSR-1600. DVCAM tapes (or DV tapes recorded in DVCAM mode) have their recording time reduced by one third.

Because of the wider track, DVCAM has the ability to do a frame accurate insert tape edit. DV will vary by a few frames on each edit compared to the preview. Another feature of DVCAM is locked audio. If several generations of copies are made on DV, the audio sync may drift. On DVCAM this does not happen.[8]

Digital8

After the MiniDV and DV specifications were introduced, Sony backported the DV encoding scheme to 8-mm camcorders, creating Digital8. This allowed recording 40 minutes of DV video onto a one-hour Video8/Hi8 cassette. The 18-μm track on Digital8 tape was even wider than on a DVCAM cassette. Digital8 never achieved the widespread acceptance of MiniDV, and Digital8 camcorders were discontinued by Sony in 2006.

DVCPRO

Panasonic specifically created the DVCPRO family for electronic news gathering (ENG) use, with better linear editing capabilities and robustness. It has an even greater track width of 18 micrometres and uses another tape type (Metal Particle instead of Metal Evaporated). Additionally, the tape has a longitudinal analog audio cue track. Audio is only available in the 16-bit/48 kHz variant, there is no EP mode, and DVCPRO always uses 4:1:1 color subsampling (even in PAL mode). Apart from that, standard DVCPRO (also known as DVCPRO25) is otherwise identical to DV at a bitstream level. However, unlike Sony, Panasonic chose to promote its DV variant for professional high-end applications.

DVCPRO50 is often described as two DV codecs in parallel. The DVCPRO50 standard doubles the coded video bitrate from 25 Mbit/s to 50 Mbit/s, and uses 4:2:2 chroma subsampling instead of 4:1:1. DVCPRO50 was created for high-value ENG compatibility. The higher datarate cuts recording time in half (compared to DVCPRO25), but the resulting picture quality is reputed to rival Digital Betacam. The BBC preferred DVCPRO50 camcorders over HDCAM cameras to film popular TV series such as Space Race (2005) and Rome (2006).[9][10]

DVCPRO HD, also known as DVCPRO100, uses four parallel codecs and a recorded video bitrate of 40-100 Mbit/s, depending on the format flavour. DVCPRO HD encodes using 4:2:2 color sampling, compared to 4:2:0 or 4:1:1 for lower-bitrate video formats. DVCPRO HD horizontally compresses recorded images to 960x720 pixels for 720p output, 1280x1080 for 1080/59.94i or 1440x1080 for 1080/50i. This horizontal compression is similar to, but more significant than, that of other HD formats such as HDCAM, HDV, AVCHD and AVCCAM. The final DCT compression ratio of DVCPRO HD is approximately 6.7:1. To maintain compatibility with HDSDI, DVCPRO100 equipment upsamples video during playback. A camcorder using a special variable-framerate (from 4 to 60 frame/s) variant of DVCPRO HD called VariCam is also available. All these variants are backward compatible but not forward compatible. DVCPRO HD is codified as SMPTE 370M; the DVCPRO HD tape format is SMPTE 371M, and the MXF Op-Atom format used for DVCPRO HD on P2 cards is SMPTE 390M.

DVCPRO cassettes are always labeled with a pair of run times, the smaller of the two being the capacity for DVCPRO50. An "M" tape can hold up to 66/33 minutes of video. The color of the lid indicates the format: DVCPRO tapes have a yellow lid, longer "L" tapes made specially for DVCPRO50 have a blue lid, and DVCPRO HD tapes have a red lid. The formulation of the tape is the same, and the tapes are interchangeable between formats. The running time of each tape is 1x for DVCPRO, ½x for DVCPRO50, ½x for DVCPRO HD EX, and ¼x for DVCPRO HD, since the tape speed changes between formats. Thus a tape rated at 126 minutes for DVCPRO will last approximately 32 minutes in DVCPRO HD.
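Because only the tape speed changes between formats, those run times follow from a single table of speed factors; a minimal sketch:

    SPEED = {"DVCPRO": 1, "DVCPRO50": 2, "DVCPRO HD EX": 2, "DVCPRO HD": 4}

    def runtime(dvcpro_minutes, fmt):
        """Run time of a tape rated for DVCPRO when used in a faster format."""
        return dvcpro_minutes / SPEED[fmt]

    print(runtime(126, "DVCPRO HD"))   # 31.5 -> the 'approximately 32 minutes' above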
Memory in cassette

Some MiniDV cassettes have a small memory chip referred to as memory in cassette (MIC). Cameras and recording decks can record any data desired onto this chip, including a contents list, times and dates of recordings, and camera settings used. It is EEPROM memory using the I²C protocol. The members of the MIC range are available in two forms:

The MIC-R family
    The MIC-R family works with serial EEPROM capacities between 1 Kbit and 8 Kbit (the I²C-compatible EEPROM devices: M24C01, M24C02, M24C04 and M24C08).
The MIC-S family
    The MIC-S family works with serial EEPROM capacities greater than or equal to 16 Kbit (the XI2C-compatible EEPROM devices: M24C16, M24C32 and M24C64).

Both families are compliant with the DV standard. For detailed information see the datasheet: Serial I²C bus EEPROM (STMicroelectronics).

MIC functionality is not widely used on the consumer level; most tapes available to consumers do not even include the chip, which adds substantially to the price of each cassette. Most consumer equipment includes the circuitry to read and write to the chip even though it is rarely used.
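Since the MIC is an ordinary serial EEPROM, reading one is routine on any I²C-capable host. The hypothetical sketch below dumps the first bytes of an M24C02-class part from a Linux board using the third-party smbus2 package; the bus number and the 0x50 device address are assumptions (0x50 is merely the conventional base address for 24Cxx-series EEPROMs), and a cassette's chip is of course only reachable through a deck's internal electronics, not a hobby connector.

    from smbus2 import SMBus        # third-party package: pip install smbus2

    EEPROM_ADDR = 0x50              # conventional 24Cxx base address (assumed)

    with SMBus(1) as bus:           # /dev/i2c-1, an assumption for the host board
        # Parts up to 2 Kbit (M24C01/M24C02) use a one-byte address pointer,
        # so a plain SMBus block read starting at offset 0 is sufficient.
        data = bus.read_i2c_block_data(EEPROM_ADDR, 0x00, 16)
        print(" ".join(f"{b:02x}" for b in data))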
Other digital formats

Digital video dates back to 1986, with the creation of the uncompressed D-1 format for professional use (although several professional video manufacturers such as Sony, Ampex, RCA, and Bosch had experimentally developed prototype digital video recorders dating back to the mid-to-late 1970s).

Sony has several digital specifications for professional use, the most common for standard definition use being Digital Betacam, a distant descendant of the Betamax products of the 1970s through 1990s, from which it received only some mechanical aspects, notably the form of the cassette. (Betamax itself descended from Sony's U-Matic ¾ inch videocassette system.)

JVC's D-9 format (also known as Digital-S) is very similar to DVCPRO50, but records on videocassettes in the S-VHS form factor. (NOTE: D-9 is not to be confused with D-VHS, which uses MPEG-2 compression at a significantly lower bitrate.)

The Digital8 standard uses the DV codec, but replaces the recording medium with the older Hi8 videocassette. Digital8 theoretically offers DV's digital quality, without sacrificing playback of existing analog Video8/Hi8 recordings. However, in practice the maximum quality of the format is unlikely to be achieved, since Digital8 is largely confined to low-end consumer camcorders. It is also a semi-proprietary format, being manufactured exclusively by Sony (although Hitachi also made Digital8 camcorders briefly).

DVD was originally created as a distribution format, but recordable DVDs quickly became available. Camcorders using miniDVD media are fairly common on the consumer level, but due to difficulties with editing the MPEG-2 data stream, they are not widely used in professional settings.

Sony also made a format called MicroMV, which stored MPEG-2 video on a matchbox-sized tape. Due to lack of platform support and the difficulties of editing MPEG-2 video, MicroMV never became popular and was discontinued by 2005.

Ikegami's Editcam system can record in DVCPRO or DVCPRO50 format on a removable hard disk.

Panasonic's P2 system uses recording of DV/DVCPRO/DVCPRO50/DVCPROHD streams in an MXF wrapper on PC card-compatible flash memory cards.

Sony's XDCAM format allows recording of MPEG IMX, DVCAM and low resolution streams in an MXF wrapper on a Sony Professional Disc, an optical medium similar to a Blu-ray Disc. Sony has also, in cooperation with Panasonic, created AVCHD, a medium-independent codec for consumer high definition video; it is currently used on camcorders containing hard disks, SD cards and DVD-R drives for storage.

Application software support

Most DV players, editors and encoders only support the basic DV format, but not its professional versions. DV audio/video data can be stored as a raw DV data stream file (data is written to a file as it is received over FireWire; file extensions are .dv and .dif) or the DV data can be packed into AVI container files. The DV meta-information is preserved in both file types.

Most Windows video software only supports DV packed into AVI containers, as it uses Microsoft's avifile.dll, which only supports reading AVI files. A few notable exceptions exist:

 Apple Inc.'s QuickTime Player: QuickTime by default only decodes DV to half of the resolution to preserve processing power for editing capabilities. However, in the "Pro" version the setting "High Quality" under "Show Movie Properties" enables full resolution playback.
 DVMP Basic & DVMP Pro: full decoding quality. Plays AVI (including DVCPRO25 and DVCAM) and .dv files. Also displays the DV meta-information (e.g. timecode, date/time, f-stop, shutter speed, gain, white balance etc.)
 The VLC media player (free software): full decoding quality
 MPlayer (also with GUI under Windows and Mac OS X): full decoding quality
 muvee Technologies autoProducer 4.0: allows editing using FireWire IEEE 1394
 Quasar DV codec (libdv): open source DV codec for Linux

Type 1 and Type 2 DV AVI files

There are two types of DV-AVI files:

 Type 1: The multiplexed audio-video is kept in its original multiplexing and saved together into the video section of the AVI file.
o Does not waste much space (audio is saved uncompressed, but even uncompressed audio is tiny compared to the video part of DV), but Windows applications based on the VfW API do not support it.
 Type 2: Like Type 1, but audio is also saved as an additional audio stream into the file.
o Supported by VfW applications, at the price of a slightly increased file size.

Type 1 is actually the newer of the two types. Microsoft made the "type" designations, and decided to name their older VfW-compatible version "Type 2", which only furthered confusion about the two types. In the late 1990s through early 2000s, most professional-level DV software, including non-linear editing programs, only supported Type 1. One notable exception was Adobe Premiere, which only supported Type 2. High-end FireWire controllers usually captured to Type 1 only, while "consumer" level controllers usually captured to Type 2 only. Software is available for converting Type 1 AVIs to Type 2, and vice versa, but this is a time-consuming process.

Many current FireWire controllers still only capture to one or the other type. However, almost all current DV software supports both Type 1 and Type 2 editing and rendering, including Adobe Premiere. Thus, many of today's users are unaware of the fact that there are two types of DV AVI files. In any event, the debate continues as to which – Type 1 or Type 2 – if either, is better.
Mixing tapes from different manufacturers

There is controversy over whether or not using tapes from different manufacturers can lead to dropouts.[11][12][13] The problem theoretically occurs when incompatible lubricants on tapes of different types combine to become tacky and deposit on the tape heads. This problem was supposedly fixed in 1997 when manufacturers reformulated their lubricants, but users still report problems several years later. Much of the evidence relating to this issue is anecdotal or hearsay. In one case, a representative of a manufacturer (unintentionally) provided incorrect information about their tape products, stating that one of their tape lines used "wet" lubricant instead of "dry" lubricant.[14] The issue is complicated by OEM arrangements: a single manufacturer may make tape for several different brands, and a brand may switch manufacturers.

It is unclear whether or not this issue is still relevant, but as a general rule many DV experts recommend sticking with one brand of tape[citation needed].

Moving Picture Experts Group

The Moving Picture Experts Group (MPEG) was formed by the ISO to set standards for audio and video compression and transmission.[1] Its first meeting was in May 1988 in Ottawa, Canada. As of late 2005, MPEG has grown to include approximately 350 members per meeting from various industries, universities, and research institutions. MPEG's official designation is ISO/IEC JTC1/SC29 WG11.

Overview

Compression methodology

The MPEG compression methodology is considered asymmetric, in that the encoder is more complex than the decoder.[1] The encoder needs to be algorithmic or adaptive, whereas the decoder is 'dumb' and carries out fixed actions.[1] This is considered advantageous in applications such as broadcasting, where the number of expensive complex encoders is small but the number of simple inexpensive decoders is large. This approach of the ISO to standardization in MPEG is considered novel because it is not the encoder which is standardized; instead, the way in which a decoder shall interpret the bitstream is defined. A decoder which can successfully interpret the bitstream is said to be compliant.[1] The advantage of standardizing the decoder is that over time encoding algorithms can improve, yet compliant decoders will continue to function with them.[1] The MPEG standards give very little information regarding structure and operation of the encoder, and implementers can supply encoders using proprietary algorithms.[2] This gives scope for competition between different encoder designs, which means that better designs can evolve and users will have greater choice, because different levels of cost and complexity can exist in a range of coders, yet a compliant decoder will operate with them all.[2]
MPEG also standardizes the protocol and syntax under which it is possible to combine or multiplex audio data with video data to produce a digital equivalent of a television program. Many such programs can be multiplexed, and MPEG defines the way in which such multiplexes can be created and transported. The definitions include the metadata used by decoders to demultiplex correctly.[3]

Standards

The MPEG standards consist of different Parts. Each Part covers a certain aspect of the whole specification.[4] The standards also specify Profiles and Levels. Profiles are intended to define a set of tools that are available, and Levels define the range of appropriate values for the properties associated with them.[5] MPEG has standardized the following compression formats and ancillary standards:

 MPEG-1: The first compression standard for audio and video. It was basically designed to allow moving pictures and sound to be encoded into the bitrate of a Compact Disc. To meet the low bitrate requirement, MPEG-1 downsamples the images and uses picture rates of only 24-30 Hz, resulting in moderate quality.[6] It includes the popular Layer 3 (MP3) audio compression format.

 MPEG-2: Transport, video and audio standards for broadcast-quality television. The MPEG-2 standard was considerably broader in scope and of wider appeal, supporting interlacing and high definition. MPEG-2 is considered important because it has been chosen as the compression scheme for over-the-air digital television ATSC, DVB and ISDB, digital satellite TV services like Dish Network, digital cable television signals, SVCD, and DVD.[6]

 MPEG-3: Developments in standardizing scalable and multi-resolution compression which would have become MPEG-3 were ready by the time MPEG-2 was to be standardized; hence, these were incorporated into MPEG-2, and as a result there is no MPEG-3 standard.[6] MPEG-3 is not to be confused with MP3, which is MPEG-1 Audio Layer 3.

 MPEG-4: MPEG-4 uses further coding tools with additional complexity to achieve higher compression factors than MPEG-2.[7] In addition to more efficient coding of video, MPEG-4 moves closer to computer graphics applications. In more complex profiles, the MPEG-4 decoder effectively becomes a rendering processor and the compressed bitstream describes three-dimensional shapes and surface texture.[7] MPEG-4 also provides Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content, like digital rights management.[8] Several new higher-efficiency video standards (newer than MPEG-2 Video) are included as alternatives to MPEG-2 Video, notably:
o MPEG-4 Part 2 (or Advanced Simple Profile) and
o MPEG-4 Part 10 (or Advanced Video Coding or H.264). MPEG-4 Part 10 may be used on HD DVD and Blu-ray discs, along with VC-1 and MPEG-2.

In addition, the following standards, while not sequential advances to the video encoding standard as with MPEG-1 through MPEG-4, are referred to by similar notation:

 MPEG-7: A multimedia content description standard.

 MPEG-21: MPEG describes this standard as a multimedia framework.

Moreover, more recently than the other standards above, MPEG has started a series of application-oriented international standards; each of these standards holds multiple MPEG technologies for a given class of application. For example, MPEG-A includes a number of technologies on multimedia application format.[9]

 MPEG-A: Multimedia application format.

 MPEG-B: MPEG systems technologies.

 MPEG-C: MPEG video technologies.

 MPEG-D: MPEG audio technologies.

 MPEG-E: Multimedia Middleware.

Betacam

Betacam is a family of half-inch professional videotape products developed by Sony from 1982 onwards. In colloquial use, "Betacam" alone is often used to refer to a Betacam camcorder, a Betacam tape, a Betacam video recorder or the format itself.

All Betacam variants, from plain Betacam to Betacam SP and Digital Betacam, use the same shape cassettes, meaning vaults and other storage facilities do not have to be changed when upgrading to a new format. The cassettes come in two sizes: S and L. Betacam cameras can only load S tapes, while VTRs can play both S and L tapes. The cassette shell and case for each Betacam cassette is colored differently depending on the format, allowing for easy visual identification. There is also a mechanical key that allows a video tape recorder to tell which format has been inserted.

The format supplanted the three-quarter inch U-Matic format, which Sony had introduced in 1971. In addition to improvements in video quality, the Betacam configuration of an integrated camera/recorder led to its rapid adoption by electronic news gathering organizations.

Even though Betacam remains popular in the field and for archiving, new digital products such as the Multi Access Video Disk Recorder are leading to a phasing out of Betacam products in a studio environment.

Betacam / Betacam SP

The original Betacam format was launched on August 7, 1982. It is an analog component video format, storing the luminance, "Y", in one track and the chrominance on another, as alternating segments of the R-Y and B-Y components using Compressed Time Division Multiplex, or CTDM.[1] This splitting of channels allows true broadcast quality recording with 300 lines of horizontal luminance resolution and 120 lines of chrominance resolution (versus ~30 for Betamax/VHS), on a relatively inexpensive cassette-based format.

The original Betacam format records on cassettes loaded with oxide-formulated tape, which is theoretically the same as that used by its consumer market-oriented predecessor Betamax, introduced 7 years earlier by Sony in 1975. A blank Betamax-branded tape will work on a Betacam deck, and a Betacam-branded tape can be used to record in a Betamax deck. However, in later years Sony discouraged this practice, suggesting that the internal tape transport of a domestic Betamax cassette was not well suited to the faster tape transport of Betacam. In particular, the guide rollers tend to be noisy.

Although there is a superficial similarity between Betamax and Betacam in that they use the same tape cassette, they are really quite different formats. Betamax records relatively low resolution composite video using a heterodyne color recording system and only two recording heads, while Betacam uses four heads to record in component format, at a much higher linear tape speed, resulting in much higher video and audio quality. A typical L-750 length Betamax cassette, which yielded about 3 hours of recording time on a Betamax VCR at its B-II speed, provided only 30 minutes of record time on a Betacam VCR or camcorder.

Matsushita / Panasonic also introduced a professional 1/2" component videotape format called "M-Format", which used VHS-style tape cassettes. However, while Sony's Betacam system rapidly became an industry standard, M-Format was a marketing failure. A followup format called M-II (effectively the Panasonic enhancement of M-Format, as SP was Sony's enhancement of Betacam) was a great improvement. Though it was used as an internal standard at NBC for a time, it failed to make much headway in the marketplace. While technically M-II was in some ways an improvement over Betacam SP, Betacam SP had the overwhelming advantage of a high degree of compatibility with the existing (and very large) Betacam infrastructure.

Betacam was initially introduced as a camera line along with a video cassette player. The first cameras were the BVP-3, which utilized three Saticon tubes, and the BVP-1, which used a single tri-stripe Trinicon tube. Both these cameras could be operated standalone, or with their docking companion VTR, the BVV-1 (quickly superseded by the BVV-1A), to form the BVW-1 (BVW-1A) integrated camcorder. Tapes could not be played back in the camera except in black and white, for viewing in the camera's viewfinder only. Color playback at first required the studio source deck, the BVW-10, which could not record, only play back. It was primarily designed as a feeder deck for A/B roll edit systems, usually for editing to a 1" Type C or 3/4" U-matic cassette edit master tape. There was also the BVW-20 field playback deck, which was a portable unit with DC power and a handle, that was used to verify color playback of tapes in the field. Unlike the BVW-10, it did not have a built-in Time Base Corrector, or TBC.
With the popular success of the Betacam system as a news acquisition format, the line was soon extended to include the BVW-15 studio player and the BVW-40 studio edit recorder. The BVW-15 added Dynamic Tracking, which enabled clear still frame and jog playback, something the BVW-10 could not deliver. The BVW-40 enabled for the first time editing to a Betacam master and, if set up and wired correctly, true component video editing. It was also possible to do machine-to-machine editing between a BVW-10/15 and a BVW-40 without an edit controller: a single serial cable between the units was all that was required to control the player from the recorder in performing simple assemble and insert editing. Additionally, two field models were introduced: the field recorder BVW-25, and the BVW-21 play-only portable field deck.

At its introduction, many insisted that Betacam remained inferior to the bulkier 1" Type C and Type B formats, the standard broadcast production formats of the late 70s to mid 80s. Additionally, the maximum record time for both the cameras and studio recorders was only half an hour, a severe limitation in television production. There was also the limitation that high quality recording was only possible if the original component signals were available, as they would be in a camera. If they had already been converted to composite video, re-converting them to components for recording and then eventually back to composite for broadcast caused a severe drop in quality.

A Betacam SP tape used by WSVN.

In 1986, Betacam SP was developed, which increased horizontal resolution to 340 lines. While the quality improvement of the format itself was minor, the improvement to the VTRs was enormous, in quality, features, and particularly the new larger cassette with 90 minutes of recording time. Beta SP (for "Superior Performance") became the industry standard for most TV stations and high-end production houses until the late 1990s. Despite the format's age, Beta SP remains a common standard for video post-production. The recording time is the same as for Betacam, 30 and 90 minutes for S and L, respectively. Tape speed is slightly slower in machines working in the 625/50 format, increasing tape duration by one minute for every five minutes of run time. So, a 90 minute tape will record 108 minutes of video in PAL.

Betacam SP is able to achieve its namesake "Superior Performance" over Betacam through its use of metal-formulated tape, as opposed to Betacam's oxide tape. Sony designed Betacam SP to be partially forward compatible with standard Betacam: Betacam SP tapes recorded on Betacam SP decks can be played in oxide-era Betacam VTRs (such as the BVW-15 and BVW-40 mentioned earlier), but for playback only. Betacam SP-branded tapes cannot be used for recording in consumer Betamax VCRs like oxide Betacam tapes, because Betacam SP's metal-formulated tape causes premature wear to the video heads in a Betamax deck, which are made of a softer material than the heads in a standard Betacam deck. However, Betacam SP tapes can be used without a problem in ED Beta VCRs, since the ED Beta format uses metal-formulated tape as well.

The new Betacam SP studio decks were the players, the BVW-60 and the BVW-65 with Dynamic Tracking, and the edit recorders, the BVW-70 and the Dynamic Tracking model, the BVW-75. The BVV-5 was the Betacam SP dockable camera back, which could play back in color if its companion playback adapter was used. A new SP field recorder, the BVW-35, possessed the added benefit of a standard RS422 serial control port that enabled it to be used as an edit feeder deck. Though the four new studio decks could utilize the full 90-minute Betacam SP cassettes, the BVW-35 remained limited to the original Beta form factor 30-minute cassette shells. Answering a need for a basic office player, Sony also introduced the BVW-22, a much less expensive desktop model that could be used for viewing and logging 90-minute cassettes, but could not be configured into an edit system.

Sony followed up the SP field recorder with the BVW-50, which could record and play the large size 90 minute cassettes. After this, the deck line remained relatively stagnant, and incredibly popular, for a decade, aside from some specialty models that could record digital audio.

Until the introduction of the BVW-200 camera, though, the camera and recorder configuration was a docking system. The BVW-200 was an integrated camera recorder system. It sacrificed the flexibility of a docking camera in order to lose a substantial amount of weight. Eventually, non-docking camcorders became the most popular design by the mid-90s.

The final analog Betacam SP camcorder was the BVW-600, which paired a camera front section very similar to the one on the DigiBeta DVW-700 with a BetaSP recorder. Like every other Betacam camera system, and unlike the DigiBeta DVW-700, the camera could not play back in color without the use of an outboard adapter.

In the early 1990s a "pro" or "industrial" line of decks was introduced, with model numbers that echoed the naming conventions of Sony's 1970s-era U-matic editing decks. These were the PVW-2600 edit source feeder and the PVW-2800 edit recorder. These high quality machines primarily lacked the third and fourth audio channels of the BVW series. In the mid-nineties, the far less expensive UVW series debuted. These machines were considerably simpler and somewhat lower quality, and were designed primarily to be used as companions to computer systems; they possessed very limited front panel controls, no jog and shuttle, and Time Base Corrector (TBC) control available only through an optional remote TBC controller. They were represented by the UVW-1800, a very popular edit recorder, and the UVW-1200, UVW-1400 and UVW-1600 players.

Betacam and Betacam SP tape cassette shells varied in color depending on the manufacturer. Many companies sold Betacam tapes, sometimes of their own manufacture, sometimes re-branded. Fuji, Maxell, Ampex and 3M were just some of the major brands to do so.

Ampex, Thomson SA and Philips each sold rebranded OEM versions of some of the Sony VTRs and camcorders at various times in the 1980s and 1990s. Other than nameplates, these models were identical to the Sony models.

Digital Betacam

Digital Betacam L tape

Digital Betacam (commonly referred to as Digibeta, d-beta, dbc or simply Digi) was launched in 1993. It supersedes both Betacam and Betacam SP, while costing significantly less than the D1 format. S tapes are available with up to 40 minutes running time, and L tapes with up to 124 minutes.

The Digital Betacam format records a DCT-compressed component video signal at 10-bit YUV 4:2:2 sampling in NTSC (720×486) or PAL (720×576) resolutions at a bitrate of 90 Mbit/s, plus four channels of uncompressed 48 kHz / 20 bit PCM-encoded audio. A fifth analog audio track is available for cueing, and a linear timecode track is also used on the tape. It is a popular digital video cassette format for broadcast use. Its main competitor is the Panasonic DVCPRO50 cassette format[citation needed].
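Those figures imply a fairly gentle compression; counting active picture only, 10-bit 4:2:2 PAL works out to roughly 207 Mbit/s, so the 90 Mbit/s tape rate corresponds to a ratio of a little over 2:1 (a back-of-the-envelope check, ignoring blanking and audio):

    width, height, fps = 720, 576, 25           # PAL; use 486 and 30000/1001 for NTSC
    raw = width * height * fps * 2 * 10 / 1e6   # 4:2:2 = two 10-bit samples per pixel
    print(f"{raw:.0f} Mbit/s raw -> {raw / 90:.1f}:1 at 90 Mbit/s")   # ~207, ~2.3:1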
Another key element which aided adoption was Sony's implementation of the SDI coaxial digital connection on Digital Betacam decks. Facilities could begin using digital signals on their existing coaxial wiring without having to commit to an expensive re-installation.

Sony-branded Digital Betacam videotape is sold in a blue-grey cassette container.

Betacam SX

Betacam SX S tape

Betacam SX is a digital version of Betacam SP introduced in 1996, positioned as a cheaper alternative to Digital Betacam. It stores video using MPEG 4:2:2 Profile@ML compression, along with four channels of 48 kHz 16 bit PCM audio. All Betacam SX equipment is compatible with Betacam SP tapes. S tapes have a recording time of up to 62 minutes, and L tapes up to 194 minutes.
The Betacam SX system was very successful with newsgathering operations which had a legacy of Betacam and Betacam SP tapes. Some Betacam SX decks, such as the DNW-A75 or DNW-A50, can natively play and work from the analog tapes interchangeably, because they contain both analog and digital playback heads.

Betacam SX uses MPEG-2 4:2:2P@ML compression, in comparison with other similar systems that use 4:1:1 or 4:2:0 coding. It gives better chroma resolution and allows certain postproduction processes such as chroma-key.

This format compresses the video signal from approximately 180 Mbit/s to only 18 Mbit/s. This means a compression ratio of around 10:1, which is achieved by the use of mild temporal compression, where alternate frames are stored as MPEG I-frames and B-frames, giving rise to an IBIB sequence on tape.
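Both the ratio and the cadence are easy to spell out (the frame pattern shown is just the IBIB alternation named above):

    source, on_tape = 180.0, 18.0                  # Mbit/s, figures quoted above
    print(f"{source / on_tape:.0f}:1")             # 10:1
    print("".join("I" if n % 2 == 0 else "B" for n in range(8)))   # IBIBIBIB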

Together with Betacam SX, Sony introduced a generation of hybrid recorders, allowing use of both tape and disk recording on the same deck, and high speed dubbing from one to the other. This was intended to save wear on the video heads for studio applications, as well as to speed up online editing.

Betacam SX also features a "good shot mark" function that allows marking of each scene for fast scanning of the tape, reading the recorded marks on each single cassette and showing the markers to the operator.

The cameras themselves are generally considered by most sound recordists to be quite noisy in operation, possibly because the amount of computer processing power, and the heat it generates, leads to cooling fans being used to keep the camera at a reasonable temperature.

Betacam SX tape shells are bright yellow.

Although Betacam SX machines have gone out of production, the format is still used by many newsgathering operations, including CNN, Canada's CBC and CTV, San Diego's KFMB-TV and NBC's operations in the San Francisco Bay Area at KNTV and KSTS. Many news archives still contain SX tapes.

MPEG IMX

MPEG IMX is a 2001 development of the Digital Betacam format. It uses the MPEG compression system, but at a higher bitrate than Betacam SX. The IMX format allows for a CCIR 601 compliant video signal, with eight channels of audio and a timecode track. Unlike Digital Betacam, it lacks an analog audio (cue) track, but will read one as channel 7 when playing back a tape that has it.

Compression is applied in three different formats: 30 Mbit/s (6:1 compression), 40 Mbit/s (4:1 compression) or 50 Mbit/s (3.3:1 compression), which allows different quality/storage efficiency ratios. Video is recorded as MPEG-2 I-frames only.

With its new IMX VTRs, Sony introduced some new technologies, including SDTI and e-VTR. SDTI allows for audio, video, timecode, and remote control functions to be transported by a single coaxial cable, while e-VTR technology extends this by allowing the same data to be transported over IP by way of an ethernet interface on the VTR itself.

All IMX VTRs can natively play back Betacam SX tapes, and some, such as the MSW-M2000P/1, are capable of playing back Digital Betacam cassettes as well as analog Betacam and Betacam SP cassettes, but they can only record to their native IMX cassettes. S tapes are available with up to 60 minutes capacity, and L tapes hold up to 184 minutes. These values are for 525/60 decks, but runtimes extend in 625/50: a 184 minute tape will record for 220 minutes, as the label itself specifies.

IMX machines feature the same good shot mark function as Betacam SX.

MPEG IMX cassettes are a muted green. The newer XDCAM format also allows recording of MPEG IMX on a tapeless medium, Professional Disc.

HDCAM / HDCAM SR

HDCAM Tape

HDCAM SR Tape
See also: HDCAM

HDCAM, introduced in 1997, was the first HD format available in the Betacam form factor, using an 8-bit DCT compressed 3:1:1 recording, in a 1080i-compatible downsampled resolution of 1440×1080, and adding 24p and 23.976 PsF modes to later models. The HDCAM codec uses non-square pixels, and as such the recorded 1440×1080 content is upsampled to 1920×1080 on playback. The recorded video bitrate is 144 Mbit/s. There are four channels of AES/EBU 20-bit/48 kHz digital audio.
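A sketch of that playback-time stretch, using linear interpolation as a stand-in for the real reconstruction filter (which the format leaves to the equipment):

    def upsample_line(samples, out_width=1920):
        """Stretch one 1440-sample HDCAM line back to full 1920-sample width."""
        n = len(samples)
        out = []
        for x in range(out_width):
            pos = x * (n - 1) / (out_width - 1)
            i, frac = int(pos), pos - int(pos)
            nxt = samples[min(i + 1, n - 1)]
            out.append(samples[i] * (1 - frac) + nxt * frac)
        return out

    print(len(upsample_line(list(range(1440)))))   # 1920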
It is used for some of Sony's cinema-targeted CineAlta range of products (other CineAlta devices use flash storage).

HDCAM SR, introduced in 2003, uses a higher particle density tape and is capable of recording in 10 bits 4:2:2 or 4:4:4 RGB with a bitrate of 440 Mbit/s. The "SR" stands for "Superior Resolution". The increased bitrate (over HDCAM) allows HDCAM SR to capture much more of the full bandwidth of the HDSDI signal (1920×1080). Some HDCAM SR VTRs can also use a 2× mode with an even higher bitrate of 880 Mbit/s, allowing for a 4:4:4 RGB stream at lower compression. HDCAM SR uses the new MPEG-4 Part 2 Studio Profile for compression, and expands the number of audio channels up to 12 at 48 kHz/24 bit.

HDCAM SR is commonly used for HDTV television production.

Some HDCAM VTRs play back older Betacam variants; for example, the Sony SRW-5500 HDCAM SR recorder plays back and records HDCAM and HDCAM SR tapes, and with optional hardware also plays and upconverts Digital Betacam tapes to HD format. Tape lengths are the same as for Digital Betacam, up to 40 minutes for S and 124 minutes for L tapes. In 24p mode the runtime increases to 50 and 155 minutes, respectively.

Sony-branded HDCAM cassettes are black with an orange lid, and HDCAM SR cassettes are black with a cyan lid.

The 440 Mbit/s mode is known as SQ, and the 880 Mbit/s mode as HQ; the HQ mode has recently become available in studio models (e.g. the SRW-5800) as well as in the portable models where it was previously available.
