
Shooting to Edit

A more effective way to shoot events is to "shoot to edit" by filming as events unfold. When you shoot to edit, you never back the tape up -- you simply reshoot the scene until you are happy with it. You leave bad scenes and mistakes on the tape, to be removed later, in the editing step.

When you shoot to edit, follow these hints from the pros, to make editing easier later:

• It's a good idea to take notes. When you edit, your notes will tell you which scenes are
good, who the players are, etc.
• Think of yourself as a story teller. The best videos have a beginning,
middle, and end. Make sure you shoot the scenes you will need to tell the story.
For instance, if you're traveling, shoot some road signs or wide shots to set the stage, so
viewers know where you are.
• Leave some room around each scene. Start recording before the action starts
and continue recording for a few seconds after the scene is complete. When you edit, this
extra room will make it a lot easier to get just the good footage without accidentally
picking up footage from neighboring scenes.
• Avoid the use of camera fades (fade to white or fade to black). If you will be editing the
footage later, it's best to add fades as you edit instead of using camera fades, which
cannot be removed or changed later.
• Some camcorders permit you to add titles as you shoot. While this is handy, it is often
better to skip the titles when shooting and add them while editing instead. Titles recorded
while you shoot are there forever and can't be changed. Separate titlers usually deliver
more fonts and sizes and titles of much higher quality than those built into camcorders.
• It is also a good idea to turn off the automatic time and date feature since, like camera
titles, the time stamp is on the scene forever.

Before planning any shoot, you must know how the footage is to be edited. This will make a radical
difference to how you approach the shoot. Primarily, you need to know if there will be post-production, or
if you will be editing in-camera.

If the footage is to be edited in post, it's helpful to know things such as who will be editing, what
equipment will be used, how much post time is available, etc.

If the editing is going to be "fast and dirty", then you shouldn't get too carried away with
the number of shots you provide. For example, if you record five versions of each shot, then
you may well find that the first version of each shot is the one that gets used. The other four shots have
only served to slow down the editor.

On the other hand, if this is an important production with emphasis on getting details right, you'll want to
provide more options for post-production. In this case, it might be prudent to get a few different
versions of the important shots, as well as a few extra cutaways, etc.

If the footage is to be edited in-camera, you'll need to plan your sequence of shooting very carefully.


Planning a Sequence

Here are a few guidelines for planning a sequence of shots. Like all rules in this game, learn them and
use them before you start flouting them.

• Begin each new scene with an establishing shot, usually a wide shot (WS) or extreme wide shot (EWS).
• Use combinations of the basic shot types. A typical shot sequence could be EWS, MS (medium shot),
CA (cutaway), MS, WS, etc.
• Plan transitions. How any two shots fit together is very important, and will determine how well the
video flows.
Avoid having similar shots follow each other. For example, don't have two WS's of the same
person back-to-back. This is called a "jump-cut", which is uncomfortable to watch - it looks as if the
person has magically jumped to a different position. If you need to get two shots of the same person in
a row, vary them between a WS, MS, CU etc. Other options are to use different camera angles, and of
course the CA (see below).
Also think about the audio transition. How does the sound from one shot flow into the sound from the
next?
• Don't linger. Once you've shown a shot long enough for the audience to take it in, move on to the
next. Most shots in television are less than 6 seconds long.
• Don't forget to use CA's (cutaways). They are very handy to avoid jump-cuts, but use them even
if you don't need to. CA's can add interest and new information.

There's an argument that says you shouldn't edit your own camera work. This is because you're too
"close" to it, and you won't see it as objectively as another editor. For example, if a particular shot was
difficult or time-consuming to get, you may be biased toward using it, whereas another editor will treat it
with the same ruthless disregard as all the other shots.

You can think of an image as a two-dimensional array of intensity or color data. A camera, however, outputs a one-
dimensional stream of analog or digital data. The purpose of the frame grabber is to acquire this data, digitize it if
necessary, and organize it properly for transfer across the PCI bus into system memory, from where it can be
displayed as an image. In order to understand this process and be capable of troubleshooting display problems, you
need to know the exact structure of the video signal being acquired. This document gives an overview of common
analog and digital video formats. For information on any non-standard video signal, please refer to the camera
manufacturer's documentation.

Table of Contents

1. Video Scanning
2. Analog Video Signals
3. Analog Standards
4. Digital Video Signals

Video Scanning
Standard analog video signals are designed to be broadcast and displayed on a television screen. To accomplish
this, a scheme specifies how the incoming video signal gets converted to the individual pixel values of the display. A
left-to-right and top-to-bottom scheme is used as shown below:

At an update rate of 30 frames/sec, the human eye can perceive a flicker as the screen is updated. To minimize this
phenomenon, interlaced scanning is typically used. Here, the image frame is split into two fields, one containing
odd-numbered horizontal lines and the other containing the even-numbered lines. Then the display is updated one
field at a time at a rate of 60 fields/sec. This update rate is not detectable by the human eye (remember AC lighting
operates at 60 Hz). Cameras that output interlaced video signals are usually referred to as area scan cameras.

For some high-speed applications, you want the display to update as rapidly as possible to detect or measure
movement accurately. In that case, you might want to update the display without combining the odd and even fields
into each frame. The resulting image frames would each consist of one field, resulting in an image with half the height
and twice the update rate of the interlaced version. This is called non-interlaced video, and cameras that output
signals of this type are referred to as progressive scan cameras.
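The field/frame relationship described above can be sketched in a few lines of Python. This is purely an illustration of the scanning scheme (real frame grabbers do this in hardware, and no particular API is assumed):

```python
# Splitting an interlaced frame into its two fields, and weaving
# them back together. Uses broadcast convention with 1-based line
# numbering: field 1 holds lines 1, 3, 5, ... (list indices 0, 2, 4).

def split_fields(frame):
    """Return (odd_field, even_field) from a full interlaced frame."""
    odd = frame[0::2]   # lines 1, 3, 5, ...
    even = frame[1::2]  # lines 2, 4, 6, ...
    return odd, even

def weave(odd, even):
    """Interleave the two fields back into a full frame."""
    frame = []
    for o, e in zip(odd, even):
        frame.extend([o, e])
    return frame

# A toy 6-line "frame", one value per line:
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]
odd, even = split_fields(frame)
print(odd)   # ['line1', 'line3', 'line5']
print(even)  # ['line2', 'line4', 'line6']
assert weave(odd, even) == frame
```

Each field has half the height of the full frame, which is why progressive (non-interlaced) output at the same line rate doubles the update rate at half the vertical resolution.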

Line scan cameras are a third type; they output one horizontal video line at a time. The frame grabber collects the
lines and builds an image of a predetermined height in its onboard memory. A variation on this is a mode
called variable height acquisition (VHA). In this mode, the frame grabber collects video lines into an image while
another input signal remains active. When the signal becomes inactive, the resulting image is transferred to system
memory. Line scan cameras are often used to image circular objects; for example, if you were to rotate a soda can in
front of a line scan camera, you could obtain a flattened image of the entire surface of the can. Line scanning is also
useful for conveyor belt applications, where parts are moving past a fixed camera. Often a detector is used to provide
a trigger signal to begin the acquisition when the object reaches the camera. In the VHA mode, a second detector
can be used to signal the end of the object, terminating the acquisition. This is extremely useful for applications in
which the objects to be imaged are of variable or unknown lengths.
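The VHA gating logic described above can be sketched as follows. This is a hypothetical software model of what the frame grabber does in hardware; the line source and gate signal are simple stand-ins for real camera and detector signals:

```python
# Variable-height acquisition (VHA) sketch: collect video lines into an
# image while a gate signal is active; when the gate goes inactive, the
# completed image is transferred out and a new one can begin.

def vha_acquire(lines, gate):
    """Collect one image per active stretch of the gate signal.

    lines -- iterable of video lines (one per line-scan exposure)
    gate  -- iterable of booleans, True while the object is in view
    Returns a list of images, each a list of lines.
    """
    images, current = [], None
    for line, active in zip(lines, gate):
        if active:
            if current is None:
                current = []        # gate went active: start a new image
            current.append(line)
        elif current is not None:
            images.append(current)  # gate went inactive: transfer image
            current = None
    if current is not None:         # flush an image still open at the end
        images.append(current)
    return images

# Two objects of different lengths pass the camera:
gate = [False, True, True, True, False, True, True, False]
lines = [f"L{i}" for i in range(len(gate))]
print(vha_acquire(lines, gate))
# -> [['L1', 'L2', 'L3'], ['L5', 'L6']]
```

Note how the two resulting images have different heights, which is exactly the point of VHA for objects of variable or unknown length.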

See Also:
Conveyor Belt Applications
Analog Video Signals

An analog video signal consists of a low-voltage signal containing the intensity information for each line, in
combination with timing information that ensures the display device remains synchronized with the signal. The signal
for a single horizontal video line consists of a horizontal sync signal, back porch, active pixel region, and front porch,
as shown below:

The horizontal sync (HSYNC) pulse signals the beginning of each new video line. It is followed by the back porch, which is
used as a reference level to remove any DC component from the floating (AC-coupled) video signal. For monochrome
signals, this clamping takes place on the back porch. For composite color signals, clamping occurs during the
horizontal sync pulse instead, because most of the back porch is occupied by the color burst, which provides
information for decoding the color content of the signal. There is a good description of all the advanced set-up
parameters for the video signal in the on-line help for the Measurement & Automation Explorer.

Color information can be included along with the monochrome video signal (NTSC and PAL are common standard
formats). A composite color signal consists of the standard monochrome signal (RS-170 or CCIR) with the following
components added:

• Color burst: Located on the back porch, it is a high-frequency region which provides a phase and amplitude
reference for the subsequent color information.
• Chroma signal: This is the actual color information. It consists of two quadrature components modulated on
to a carrier at the color burst frequency. The phase and amplitude of these components determine the color
content at each pixel.
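As a rough numerical illustration of the quadrature scheme just described: two components modulated in quadrature onto the subcarrier produce a signal whose amplitude encodes saturation and whose phase (relative to the color burst) encodes hue. The component values below are arbitrary examples, and the details of real NTSC encoding/decoding are omitted:

```python
# Quadrature modulation sketch: two color components on one carrier
# at the color burst frequency.
import math

F_SC = 3.579545e6   # NTSC color subcarrier frequency (Hz)

def chroma(t, u, v):
    """Chroma signal value at time t for quadrature components u and v."""
    w = 2 * math.pi * F_SC
    return u * math.sin(w * t) + v * math.cos(w * t)

# The two components jointly determine amplitude (saturation)
# and phase (hue) of the modulated chroma:
u, v = 0.3, 0.4
amplitude = math.hypot(u, v)            # 0.5
phase = math.degrees(math.atan2(v, u))  # ~53.1 degrees
print(amplitude, phase)
```

This is why the color burst matters: without that phase and amplitude reference on the back porch, the decoder could not recover hue and saturation from the chroma signal.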

Another aspect of the video signal is the vertical sync (VSYNC) pulse. This is actually a series of pulses that occur
between fields to signal the monitor to perform a vertical retrace and prepare to scan the next field. There are several
lines between each field which contain no active video information. Some contain only HSYNC pulses, while several
others contain a series of equalizing and VSYNC pulses. These pulses were defined in the early days of broadcast
television and have been part of the standard ever since, although newer hardware technology has eliminated the
need for some of the extra pulses. A composite RS-170 interlaced signal is shown below, including the vertical sync
pulses. For simplicity, a 6-line frame is shown:
It is important to realize that the horizontal size (in pixels) of an image obtained from an analog camera is determined
by the rate at which the frame grabber samples each horizontal video line. That rate, in turn, is determined by the
vertical line rate and the architecture of the camera. The structure of the camera's CCD array determines the size of
each pixel. In order to avoid distorting the image, you must sample in the horizontal direction at a rate that chops the
horizontal active video region into the correct number of pixels. An example with numbers from the RS-170 standard:

Parameters of interest:

• # of lines/frame: 525 (this includes 485 lines for display; the rest are VSYNC lines for each of the two fields)
• line frequency: 15.734 kHz
• line duration: 63.556 microsec
• active horizontal duration: 52.66 microsec
• # active pixels/line: 640

Now, some calculations we can make:

• Pixel clock (PCLK) frequency (the frequency at which each pixel arrives at the frame grabber):
640 pixels/line / 52.66 e-6 sec/line = 12.15 e6 pixels/sec (12.15 MHz)
• Total line length in pixels of active video + timing information (referred to as HCOUNT):
63.556 e-6 sec * 12.15 e6 pixels/sec = 772 pixels/line
• Frame rate:
15.734 e3 lines/sec / 525 lines/frame = 30 frames/sec
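The calculations above are easy to check in code. The following sketch simply reproduces the arithmetic using the RS-170 parameters listed:

```python
# RS-170 timing arithmetic from the parameters above.

ACTIVE_PIXELS_PER_LINE = 640   # active pixels per line
ACTIVE_H_DURATION = 52.66e-6   # active horizontal duration (s)
LINE_DURATION = 63.556e-6      # total line duration (s)
LINE_FREQUENCY = 15.734e3      # line rate (lines/s)
LINES_PER_FRAME = 525          # total lines per frame

# Pixel clock: the rate at which pixels arrive at the frame grabber
pclk = ACTIVE_PIXELS_PER_LINE / ACTIVE_H_DURATION   # ~12.15 MHz

# HCOUNT: total line length in pixels (active video + timing)
hcount = LINE_DURATION * pclk                       # ~772 pixels/line

# Frame rate
frame_rate = LINE_FREQUENCY / LINES_PER_FRAME       # ~29.97 frames/s

print(f"PCLK       = {pclk / 1e6:.2f} MHz")
print(f"HCOUNT     = {hcount:.0f} pixels/line")
print(f"Frame rate = {frame_rate:.2f} frames/s")
```

Note that using the exact 15.734 kHz line rate gives 29.97 frames/sec, the actual NTSC color frame rate, rather than the nominal 30.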

Analog Standards
The following table describes some characteristics of the standard analog video formats in common use:

Format  Country                  Mode   Signal Name  Frame Rate    Vertical Resolution  Line Rate    Image Size
                                                     (frames/sec)  (lines)              (lines/sec)  (WxH pixels)
NTSC    US, Japan                Mono   RS-170       30            525                  15,750       640x480
                                 Color  NTSC Color   29.97         525                  15,734
PAL     Europe (except France)   Mono   CCIR         25            405                  10,125       768x576
                                 Color  PAL Color    25            625                  15,625
SECAM   France, Eastern Europe   Mono                25            819                  20,475       N/A
                                 Color                25           625                  15,625
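The entries in the table are internally consistent: frame rate equals line rate divided by lines per frame, as in the RS-170 example worked earlier. A quick cross-check in Python (using a few rows from the table):

```python
# Cross-check of the analog standards table:
# frame rate = line rate / lines per frame.
# Entries: (signal name, lines per frame, line rate in lines/sec).
standards = [
    ("RS-170",     525, 15_750),
    ("NTSC Color", 525, 15_734),
    ("PAL Color",  625, 15_625),
]

for name, lines, line_rate in standards:
    print(f"{name}: {line_rate / lines:.2f} frames/sec")
# RS-170: 30.00 frames/sec
# NTSC Color: 29.97 frames/sec
# PAL Color: 25.00 frames/sec
```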

Digital Video Signals


Digital video signals are produced by cameras in which the signal is digitized at the CCD array, rather than at the
frame grabber. Applications that call for the use of digital video usually include some or all of the following
requirements:

• High spatial resolution (larger images)


• High intensity resolution (bit depth)
• High speed
• Flexibility in timing or scanning characteristics
• Noise immunity

The timing signals for digital video are much simpler than those for analog video, since the signal is already digitized.
They include a pixel clock, which times the data transfer and can be an input to or output from the camera; a line
enable to signal the beginning and end of each video data line; and a frame enable to signal the start and completion
of each frame:

These signals, as well as the data itself, can be single-ended (TTL) or differential (RS-422 or LVDS).

There is no standard scanning pattern for digital video signals, so the digital frame grabber needs to be configurable
in order to be compatible with all the different scanning conventions available. One important factor in the type of
scan is the number of taps a camera has. Some cameras can output two, four, or more pixels in parallel. For
example, a 32-bit frame grabber (having 32 data I/O lines) is capable of reading four 8-bit pixels simultaneously. So,
the frame grabber needs to be configured to place those four pixels in the proper portion of the image. The camera
documentation specifies the exact order in which the image data will be delivered to the frame grabber.
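The multi-tap unpacking step can be sketched in software. This is a hypothetical illustration only: the actual tap ordering is camera-specific (which is exactly why the camera documentation must be consulted), and this example simply assumes the lowest byte of the bus word is the leftmost pixel:

```python
# Multi-tap unpacking sketch: a 4-tap camera delivers four 8-bit
# pixels per 32-bit data-bus word, and the frame grabber must place
# each pixel in its proper position in the image line.

def unpack_word(word, taps=4, bits_per_pixel=8):
    """Split one data-bus word into its parallel pixels (low byte first)."""
    mask = (1 << bits_per_pixel) - 1
    return [(word >> (i * bits_per_pixel)) & mask for i in range(taps)]

word = 0x04030201          # four pixels arriving in one clock cycle
print(unpack_word(word))   # -> [1, 2, 3, 4]
```

A frame grabber configured for the wrong tap order would interleave the pixels incorrectly, producing a characteristically scrambled (often column-striped) image.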

Begin With the Edit in Mind

The production footage of a professional director is like a racing car in a kit. Though there are thousands of parts to
assemble, every last nut and bolt is present, beautifully finished and shaped for a perfect fit. Amateur footage can be
more like a wrecking yard: a skilled mechanic might improvise a jalopy out of scavenged parts but it ain't gonna win
any races.
Directing like a pro means designing and fabricating the hundreds of shots in a video shoot so an editor can
assemble them smoothly. Though the details may be fearsomely complex (say, in a film like Titanic) the basic
principles are simple.
But first you must recognize that you're not building the racer as you shoot. Instead, you're machining parts for
the editor to assemble in post production. Professional directors start from the premise that a program is not a
straight documentation of its content. Instead, it's a synthesis, a construction of small pieces that create the illusion
of a seamless whole because they fit together so tightly.
Think of it this way: when you tape a real-time event like a ball game, you're simply recording what happens.
Editing in this case means weeding out the bad stuff. But if you're shooting a scripted program, you're creating
"what happens" one piece at a time. Editing a scripted program means picking and assembling the good stuff from
scratch. Most documentary editing is subtractive; other professional editing is additive.
With that crucial difference understood, you're ready to consider the three C's of shooting for the edit: coverage,
continuity and cutability. With these principles as guidelines, you can create all the components of an Indy 500
champion.

Get It Covered
Coverage means giving the editor enough material to cut with. In order to provide this coverage, you need one other
C: completeness. It's amazingly easy to omit things by accident, which often happens because shooting is so
fragmented. It may seem obvious, but hey, don't tape a prisoner opening his cell door without showing where the
key came from.
Beyond the obvious goal of completeness, coverage means overlap and repetition. In overlapping action from one
setup to the next, you allow the editor to start the incoming shot precisely where he or she has clipped the outgoing
shot (see Figure 1). Since the action continues across the cut, the uninterrupted flow conceals the edit.
It's not enough, however, for the editor to start shot B just any old place before the end of shot A. As the
director/camera operator, you need to pre-visualize good edit points in the action as you stage it, so that at least one
or more are built into both shots. If a character snatches a letter from a table, there's probably one edit point just
before she reaches for it, a second point just before she lifts it and a third point as she begins to open it. Covering all
three moments in both the A-roll and B-roll increases the editor's cutting options.
Most professional directors go so far as to repeat entire beats (short segments) of action in multiple angles. The
classic way to cover your caboose is by shooting the entire scene in master shot (an angle wide enough to show the
whole action). Then you shoot the scene at least twice again from closer angles, to feature individual players and to
show details.
The old "master shot/his closeup/her closeup" scheme has long been a cliche, but it does at least deliver ample
footage to cut with. A more modern alternative is to replace the master shot with a custom-built setup (such as a
moving shot) that covers the entire action. This avoids the master shot�s static, stagy feeling while still providing full
coverage from at least three different angles.
A good director also provides coverage for protection. A protection shot anticipates a potential problem and delivers
the means to fix it. For example, suppose shot A shows a character getting into a car, turning around in the driveway
of a large estate and accelerating to the street. Shot B shows the car screech into the street and drive away.
Watching the action in shot A, you wonder if all that backing around might get boring. To protect yourself, you shoot a
quick closeup of the driver as he reverses and then starts forward. By cutting in this protection shot, the editor can
omit most of the backing footage and lose much of the driveway as well. An action that took 30 seconds in real life
now lasts only ten on the screen.

Cutaways, Inserts and Color Shots


To complete your coverage of a sequence, capture cutaways, inserts and color shots. Cutaways show material other
than the main subject. For example, a cutaway might show a gardener reacting as the car roars off down the
driveway.
An insert reveals a detail of the action or provides extra information. A hand turning an ignition key is an insert that
shows detailed action, while a closeup of a fuel gauge reading empty delivers information. If an insert or cutaway
follows a shot of a performer noticing or looking at something, the shots are sometimes called a "glance/object" pair.
Color shots are cutaways that add to the general atmosphere rather than provide information about the action. A wide
shot of the garden party on the lawn behind the driveway would be a color shot.
Cutaways and inserts make great buffer shots. By displaying totally different content, a buffer shot softens viewers'
memories of the shot that preceded it. So if the editor wants to shorten a dull passage, correct an action mismatch or
screen direction booboo, or repair a performance mistake, a cutaway or insert at the fix point will often conceal the
surgery.
Incidentally, color shots don't buffer as well because they're unmotivated. They're great for establishing the feel
of a new locale, but they can seem intrusive when they interrupt the action. If you cut back to the lawn party in the
middle of the car's screaming exit, the audience wonders what the party shot is for. For more details about
cutaways, check out The Wonderful Cutaway in the June 2000 issue of Videomaker.

Simple video editing is now so easy that there's almost no excuse for not shaping your raw
footage into snappy programs that keep your viewers cheering, or at least awake. To
achieve this, you need to start editing in your head while you're shooting, in order to
produce raw footage pre-designed for smooth cutting.
Shooting to edit can be a sophisticated craft, but the basics are so simple that you can
remember them with just three words: "coverage," "continuity" and "cutability." So let's add
these Three C's of Shooting to Edit to Videomaker's famous Seven Deadly Camera Sins
(Camera Sinners, Repent!--September 2002 issue) and Seven Golden Rules for Composition
(The Seven Golden Composition Rules--November 2001).
Coverage
Coverage means providing the editor with enough footage to show viewers all the essentials
of the place and events you're taping. (For clarity, we'll refer to the editor in the third
person, even though you probably edit programs yourself.) Good coverage means:
* Orienting your viewers
* Capturing essential shots
* Including telling details
Let's run through them.
Remember that your viewers can't see anything outside the frame. So, be sure to include an
establishing shot, a wide-angle view that takes in the whole scene. (If the locale's a big one,
you may need to pan across it.) Once viewers see that the birthday

I'm always thinking of the edit while I shoot.

I always cringe when I hear a DP, camera operator or director say: "Don't worry about that now, we can just fix it in
post." You almost never hear that coming from the editor, the guy whose job it is to "fix it in post."

I know, because I'm often the editor. Other times, I'm the DP or the producer, but I look at every aspect of the
production with the edit in mind. I have to, because that's where it all comes together.

As the project gets underway, I think in terms of effectively telling the story from all the angles that lead up to the edit.
For example, are the lighting, camera placement and pacing contributing to the editor's ability to bring this about?

This actually becomes more important when I have to, as we all often do, play all of the production roles myself. The
time I spend as the DP on improving, simplifying or ensuring consistency between shots saves me, as the editor, time
and money - which also makes me, as the producer, very happy.

All of those are visual aspects of production, and each deserves a closer look. For now, let's focus on the part of the
story that you hear.

THE INTERVIEW

As one example, let's consider the interview. Whether your interview will be part of a documentary, a news story, or a
corporate video, the idea is the same. Prepare. Then listen.

There's a popular idea that documentaries, in particular, magically appear out of mountains of footage. Be objective,
shoot everything, and let the footage tell you the story. That's oversimplifying the idea - but the idea itself is
oversimplified, and usually wrong.

Of course the content you gather in interviews will help shape your story, but have a good idea of the content you
need before you begin, or you wouldn't be able to make a list of questions. Prepare first. Having an idea of what you
need is your best chance of getting it.

This is true for every kind of interview. As an example from my experience in corporate video, the chances are good
that you will interview an executive who's not getting paid by the hour. They'll consider the interview to be a waste of
time, and won't be there long enough so that you can "shoot everything." If you don't prepare, it will waste everyone's
time and could be the last job you do for that company.

So you've prepared. The scene is set and lit to perfection. You're ready to roll. Now what? With the content you're
hearing, start building the edit in your head. Start by listening to what the interviewee is saying. As you listen with the
edit in mind, you'll be able to shape the unfolding story.

Are the answers or anecdotes delivered succinctly? If not, you might be able to ask the question a different way, or
ask something more specific to get what you are looking for.

Now, you can't allow yourself to be so rigid with your vision of what the final piece should be that you don't recognize
something better, even magical happening in the interview. The important thought here is that being prepared, and
starting the edit while the interview is underway, not only allows for the opportunity of better storytelling but
encourages it.

Whether it's the story you started with, or the one that emerges as you listen, do the answers or anecdotes help the
telling of the story? If they raise more questions, ask them, even if they're not on your list.

Does the interviewee either work the question back into the answer or in some way make the context clear? If not,
the content you get, no matter how wonderful, may simply not fit into the final piece. It might only be fit as one of
those deleted scenes on the Special Edition DVD - which is never going to happen.
DEVELOP THE FLOW

Listen more closely, as the editor. Is there enough of a pause between question and answer (and the next question)
to make an edit? If not, change the pace of the questions and the flow of the interview.

Do you hear paragraphs? Any help you can give the interviewee in grouping their thoughts will make your life as
an editor much easier.

Note that none of these suggestions involve putting words in the mouths of the people you interview. The point isn't to
make them tell your story the way you want it told. By carefully listening, and by redirecting as needed, you help your
subjects make their own thoughts clearer, in a way that you can actually use.

Think of it this way: The people you interview are writers. By helping them state their thoughts more clearly, you're
acting as the editor.

To circle back to where we began, listening closely while you start to edit the piece in your head can also give you
clues about visual material you need. Are there gaps in the story, awkward pauses or even camera bumps that you
need to cover with B-roll footage? Are there graphic elements or photos that might also help tell the story? Might the
person being interviewed be the best source for them?

These are a handful of things to keep in your head as you prepare for the interviews you shoot, and while you're
shooting them. They're part of what it means to "work smarter, not harder," which embodies the approach I strive for
in all aspects of the storytelling process.

Keep the edit in your head through the whole process. Prepare, so you know what you need. Listen, so you know if
you have it. If you don't have what you need, find a way to get it before you leave the site.

You can't fix it in post if you don't have it!

BASIC VIDEO SHOOTING AND EDITING ERRORS

By Angela Grant
Jason Kandel of the Los Angeles Daily News emailed me early this month and asked me to critique his
video about firefighters preparing for fire season.
There are some basic video editing and shooting errors in the video. I will illustrate the errors with
screenshots, and then provide solutions to fix the problems for next time.
WIDE, MEDIUM, TIGHT

Nearly all shots in this video use the focal length illustrated in these thumbnails. While the shots do
sort of show me the scene from different angles, they’re still not showing me anything meaningful
because I can’t see any details. If I turned off the sound in this video, I’d have no idea what these
people were doing. I wouldn’t even know they’re firemen because I can’t clearly see their uniforms! To
fix this problem:
• For everything that you see and want to record, remember that you need to shoot (at
least) four shots: a wide shot, a medium shot, and at least two closeups — face and hands.
• You should not shoot two wide shots or two medium shots in a row, but shoot as many
closeups as you see!
• Remember that your viewers will be watching on the Internet and the video will be small. In
wide shots, they will not be able to see significant details. Closeups are king, because you can
focus your viewers’ attention on what really matters in your story.
• If you shoot sequences like this, you’ll have enough footage to edit a compelling story that
continuously shows people new and interesting things. You’ll have enough footage to cut every
3-4 seconds, creating a fast-paced and exciting story.
Here are two more examples of scenes that would have really benefited from multiple sequences of
wide, medium and tight shots.

JUMPCUTS

In the introduction we see about four shots of a helicopter flying and dropping water on the hillside.
These shots are all the same focal length – wide shots – and they all show the same scene – hillside –
with the same subject – helicopter. When you edit shots back to back with the same focal length,
scene and subject, it produces a jumpcut. The viewer sees an eerie and unrealistic thing: your subject
magically disappears and then appears again in another location, with no logical visual explanation of
this disappearing act. To fix this problem:
1. Instead of using all four shots, just track the helicopter for a longer period of time and
just use the one long take.
2. If you really want to use more than one shot, perhaps to show the water bucket
dropping more times, you must shoot a bit differently. While shooting, allow the helicopter to
fly OUT of your frame instead of tracking it in the air the whole time. Once it flies out of your
frame, you can show another shot of it flying somewhere else in the air without the visual
disconnect that happens with a jumpcut.
3. If you really want to use more than one shot of the helicopter, then shoot other shots
to edit in between them. For example, shoot the face of a person on the ground who is
watching the helicopter fly through the air. Or shoot the picture of the ground getting drenched
by the water bucket.
One more jumpcut example about one minute into the video:

SHOW, NOT TELL


Finally, I noticed this video does a good job at providing information in the narration, but it did a poor
job at providing information in the visuals. The whole story seems to be tell, tell, tell, with a few
pictures thrown in there in hindsight. Video is a medium to show people things. If you’re showing
meaningful things, your video will be able to do something that your words will never be able to do. To
fix this problem:
• Follow the advice about shooting sequences. Get great closeups. Shoot and edit together a
killer visual story.
• You must write to your video. That means that you shouldn’t include information in your
story unless you have the visuals to back it up.
• When you’re done editing, turn off your speakers and watch your video. Does it still make
sense? If so, you’ve done a good job of telling a visual story.
• Now turn on your speakers. What you hear should add an entire new layer of information: you
don’t need to include information that is already conveyed in the visuals.

Tips Before You Get Started


Before jumping into something complicated, know your requirements first:

1) What you really need — in terms of equipment and software — to make it work and
happen.

2) The concepts you have to understand in order to use any of the popular editing
packages.

And know what the finished piece should contain:

3) A title at the beginning

4) A set of “shots” cut together in a nice way to tell a story

5) A fairly high number of shots

6) Interesting transitions between the shots
