Objectives

By the end of this lesson participants will be able to:

  • Edit video news stories
  • Edit audio news stories

Non-Linear Editing

Veterans in radio and television news can recall times when editing was a time-consuming, awkward chore. Both film and audio tape had to be physically cut and glued or taped together into the sequence the editor intended to present. On film, the sound track ran several frames ahead of the picture it was associated with, so the editor needed to provide additional pictures (called B-roll) that ran on another projector to cover the edits. The tools of the audio editor included a crayon-like soft pencil to mark the spot on the tape that needed to be cut and a razor blade to do the cutting. The cuts were then taped together in the order the editor intended to present them.

Even in the early days of videotape editing, it was done sequentially. Video shots were assembled one at a time from one tape machine to the next. A mistake required the editor to re-do the editing from the point of the mistake.

Today editing is non-linear. Editors can work on any part of a video or audio package and assemble the parts as a later step. Mistakes can be cut from the project with new material inserted without having to re-do the balance of the project. These modern editing systems also allow editors to insert pictures or other audio segments over the primary shot without having to use a B-roll on a second projector.

Lesson Summary

We divide the editing process into five steps. The steps apply both to video and audio editing. They are: capturing, assembling, mixing sound, inserting effects, and exporting.

While some news organizations are still using older editing methods, we focus in this lesson on systems that treat video and audio as computer files and use computer software to accomplish the editing. We also describe the general steps of the editing process in these pages. Each editing system will have somewhat different names for these steps, and the buttons, knobs and labels will differ; however, the processes are similar.

Video Standards

Several measures describe the way video is captured, stored and presented. These include frames per second, presentation formats, compression processes and file sizes.

Compression. Original digital video signals contain so much information that they must be compressed to reduce file sizes to practical levels for routine processing and storage. Compression samples the original signal and stores that sample. Software tools called codecs are used to encode and decode the material. Compression is done in two ways. Intraframe coding compresses each frame individually, resulting in larger files but easier editing than the second process, interframe coding, in which each frame is compared to the previous frame and only the difference is stored.
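
The distinction can be pictured in a few lines of Python. The sketch below is an illustrative assumption rather than the workings of any real codec: intraframe coding keeps every frame whole, while interframe coding keeps the first frame whole and thereafter stores only the change from the previous frame.

    frames = [10, 10, 11, 11, 15, 15, 15, 16]   # stand-in "frames" (single numbers)

    # Intraframe idea: every frame is stored on its own.
    intraframe = list(frames)

    # Interframe idea: store the first frame whole, then only the change from the previous frame.
    interframe = [frames[0]] + [b - a for a, b in zip(frames, frames[1:])]

    print(intraframe)   # [10, 10, 11, 11, 15, 15, 15, 16]
    print(interframe)   # [10, 0, 1, 0, 4, 0, 0, 1] -- mostly small values, which compress well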

Two compression types are lossy and lossless. The former is used frequently with multimedia files; it reproduces an approximate version of the original. The latter reproduces an exact copy of the original and is used primarily for data that must be restored to the original for subsequent use.

Frames per second. To simulate movement, video records and then presents a number of individual frames each second. The PAL format uses 25 frames per second; NTSC uses 30 fps. High-definition video is often shot at 24 fps. The editing process transforms the playback to the appropriate speed.

Bit rates. Bit rates describe the amount of information in a file that is processed in a given period of time, often a second. There are two elements -- size and rate. Size (e.g. 16-bit) identifies the number of bits captured each time a sample of the original is taken for compression. Rate (e.g. 96 kHz) describes the number of samples made in a time period, usually a second. The goal is to use bit rates as low as possible to ensure fast transmission while presenting acceptable quality.
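
To make the size-and-rate relationship concrete, here is a minimal Python sketch; the 16-bit, 48 kHz, stereo figures are illustrative assumptions, not values taken from the lesson.

    # Uncompressed audio bit rate = bits per sample x samples per second x channels
    bit_depth = 16          # bits captured in each sample
    sample_rate = 48_000    # samples per second (48 kHz)
    channels = 2            # stereo

    bit_rate = bit_depth * sample_rate * channels                  # bits per second
    print(f"{bit_rate / 1_000_000:.3f} Mbit/s")                    # 1.536 Mbit/s
    print(f"{bit_rate / 8 / 1_000_000 * 60:.1f} MB per minute")    # about 11.5 MB per minute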

Frame sizes. Video display is described as lines or pixels. The display of standard definition television (before high definition) was described in the number of visible horizontal lines scanned -- 625 in the PAL format used in most countries of the world and 525 lines for NTSC format used in the United States, Canada and Japan. The ratio of width to height in standard definition is 4:3.

Displays on computer screens and for high definition television are described in pixels, small electronic units that store color information about a tiny piece of the picture. The displays are described in the number of pixels across the screen in the horizontal and vertical dimensions.

Beyond the screen ratio of 16:9, there is not a single standard for high-definition television display. The two primary standards are 720p, which describes a display of 1280 by 720 pixels; the p stands for progressive scanning, in which a single scan produces the picture within a single frame. The other format is 1080i, for a display of 1440 x 1080 pixels; the i represents interlaced scanning -- two scans of alternating lines to produce the picture.

The actual presentation of video through the Internet, or on a computer screen from DVDs or other storage devices, may be substantially smaller in the number of pixels than the maximum for those displays. These smaller presentations require significantly smaller files and thus speed the display of the video.
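
As a rough illustration of why smaller frames mean smaller files, the sketch below compares raw (uncompressed) pixel counts per frame; the 640 by 360 web size is an illustrative assumption.

    def pixels(width, height):
        return width * height

    hd = pixels(1280, 720)    # 921,600 pixels per frame (720p)
    web = pixels(640, 360)    # 230,400 pixels per frame (a smaller web presentation)

    print(hd / web)           # 4.0 -- the web frame holds a quarter of the pixels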

Display Formats. For download from the Internet, the developing standard is MPEG-4, developed by the Moving Picture Experts Group. The primary display programs for streaming video from the Internet (video that is not actually downloaded) include QuickTime, Macromedia Flash, and Windows Media Player. The primary standards for video display from computers are AVI for Windows and QuickTime for Macintosh.

Audio Formats. As one might expect, audio signals are much less complex than digital video signals; therefore, they can more easily be stored and processed in uncompressed formats. The most common of those are the .wav format for Windows and .aiff for Macintosh computers. The most common compressed audio format for playback is MP3, a proprietary format created by the Moving Picture Experts Group. The Ogg Vorbis format is an open-standard alternative to MP3.

Capture

Process. Capture is the process by which video and audio files are transferred to the editing system for processing. If the files were created in digital format, the transfer can be accomplished directly with no format changes. The standard for this transfer is IEEE 1394. It was initially developed by Apple for Macintosh computers and named FireWire. That proprietary name is now commonly used to describe the standard on Windows computers as well.

If the original material was shot in analog format, or the files are transferred from VHS or other analog storage devices, the format must be changed in the capture step in order to be processed by digital editing systems. The process is known as digitizing.

Space requirements. Given the substantial file sizes for video, the hard drive storage capacity must be substantial. An hour of video might require as much as 13GB (gigabytes). The amount of memory in the computer should also be large to enable it to do the processing with reasonable speed. Random access memory (RAM) of 512MB is a minimum.
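
For a sense of where a figure like 13GB per hour comes from, here is a minimal sketch; the 3.6 MB-per-second data rate is an assumption roughly consistent with DV-format video, not a number given in the lesson.

    # Rough storage estimate for one hour of video
    megabytes_per_second = 3.6       # assumed data rate, roughly that of DV (video plus audio)
    seconds_per_hour = 60 * 60

    total_mb = megabytes_per_second * seconds_per_hour
    print(f"{total_mb / 1000:.1f} GB per hour")   # about 13.0 GB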

Windows File Systems. Older versions of Windows operating systems used a file system called FAT32, which limits the size of individual files (to 4GB) -- a significant restriction for video. The replacement system, known as NTFS, removes that limitation and should be used for digital video editing with Windows operating systems.

Transcoding. When the original video signals are compressed using the interframe process, individual frames may not stand alone; the codec records only differences from the previous frame. The process records an independent frame at regular intervals to establish a basis for the subsequent frames. These anchor frames are designated I-frames. Many editing systems require that interframe-compressed video be transcoded into an independent-frame (intraframe) format for ease of editing. This transcoding adds substantially to the file size.

Logging and Timecode. Timecode is an essential non-visual data element that is used to keep the audio and video signals in sync. In the recording process each frame is given its own label, made up of its time and frame number, on the hard drive, tape or memory stick. The photographer can assist the editing process by keeping a manual log of the shot order and the approximate place of individual shots on the hard drive, tape or memory stick.
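
Timecode is conventionally written as hours:minutes:seconds:frames. Here is a minimal sketch of converting a running frame count into that label, assuming a 25 fps (PAL) frame rate purely for illustration.

    def frames_to_timecode(frame_count, fps=25):
        """Convert a running frame number to an HH:MM:SS:FF timecode label."""
        frames = frame_count % fps
        total_seconds = frame_count // fps
        seconds = total_seconds % 60
        minutes = (total_seconds // 60) % 60
        hours = total_seconds // 3600
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    print(frames_to_timecode(90125))   # 01:00:05:00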

Graphics and photographs. Beyond the video and audio, a video production may also require graphics or still pictures. These should be developed and imported to the editing system before the editing process. Raster graphics and photographs should be stored at sizes larger than the probable output because, when resizing is required, enlarging seldom produces satisfactory results; reduction works substantially better. Raster graphics and photographs are stored as pixels (bitmaps), in contrast to vector graphics, which are stored as geometric descriptions that can be resized without loss of quality.

Continuity

Continuity includes tools and techniques photographers use to present the story. We introduce the concept of continuity in this editing lesson because editors create the sense of action, sequencing, time flow, etc. from the material given them by the reporter and videographer.

Stories usually have beginnings, middles and ends. So too does the video story or the multi-part still-picture story. While the post-shoot editing process will actually build the sequences and overall continuity for the story, the photographer should also edit during the shooting process. Being aware of what an editor will need, and providing those shots more or less in editing order, will speed the editing process and enhance the overall quality of the story because the photographer has provided the essential shots. Here are the primary tools and techniques photographers use in this storytelling process.

Scenes. Scenes are a series or sequence of pictures from different angles, showing different aspects of the same activity, that, when assembled, produce a video segment for part of a story. A scene is sometimes a single shot; more often it is composed of three or more shots that make up what is called a sequence. Sequences are linked together to produce the story.

The terminology for the shots within a sequence includes these: wide shot, medium shot, close-up, reaction shot, cut-in, cut-away. Another way to understand a sequence is to shoot the action within the scene and the reaction to it. A simple example is the interview sequence. The photographer photographs the person being interviewed (close-up) and then the interviewer (close-up and reaction shot). The sequence could also include a shot of the two persons (medium shot). It might also include the interviewer taking notes (cut-in), the nervous finger tapping of the person being interviewed (cut-in), a wide shot of the interview location, a photo of the report on which the interview is based (cut-away), etc. Each of the shots should be at least five seconds long to provide for editing and assembling the separate shots into the presentation sequence. The interview shots themselves will, of course, be longer.

Opening shots. While it is often tempting to begin a sequence with a scene-setting shot -- the exterior of the building where the meeting is taking place, etc., this is seldom the best idea. In the editing process, the producer should select the most powerful picture in the story as the opening shot and build the first sequence from there. In shooting the video the photographer must find one or more potential opening shots -- the compellingly interesting shots that define the story. The photographer then builds a sequence around those shots. Opening shots can be any of the types of shots listed above including a close-up of some part of the action. For example, a story on retail sales might open with a close-up of an exchange of money.

Closing shots. Both sequences and stories themselves need closing shots. For story closings editors often want to use the second most powerful shot to close a story if possible. The editing will produce a closing sequence that ends with that shot. In shooting the story the photographer must shoot pictures that enable the editor to build that final sequence. There are some classic closing shots that involve a person or thing moving away from the camera into the distance -- a car driving away down the street, a person walking away from the scene, back to the camera. A second type of closing shot shows the work in the story continuing -- for the retail story it might be customers continuing to shop.

Closing shots for sequences provide a way for the story to move from one location to another or one subject to another. Here is a list of common sequence closers.

  • Out-of-frame. The subject leaves the frame. The camera does not follow. This enables the next shot to show the same person in a different location.
  • Exit. This is a version of the empty frame in which the subject leaves or enters a room or building through a door.
  • Cut-away. This is a shot of something related to the sequence, often a book, report, or other non-human element.
  • Reaction shot. This is a shot that shows reaction to the primary activity of the sequence. It could be a cheering crowd at the end of a game to enable the next shot to be the post-game interview.
  • Blur focus. This is a shot in which the photographer blurs the focus of the scene to be merged with a blurred focus that opens the next sequence.
  • Fade out. This effect can be added during editing. The visibility of the current shot fades and is mixed with a fade in to normal brightness of the opening shot of the next sequence. This also can be done in shooting where the iris of the camera is closed. It reopens slowly on the first shot of the next sequence.
  • Continuing action. This can be a shot of continuing production at a factory, a school classroom or another place with repetitious action. It tells the viewer that this part of the story continues while the next phase of the story opens somewhere else. The primary limitation is that those shown in the continuing-action shot should not appear in the next sequence.

Other continuity shots. Here are other shots that help build the story sequences:

  • Silhouettes. One can open a shot with the background properly exposed and detailed with the subject in silhouette. The photographer can change the focus and aperture to reveal the subject properly exposed for detail.
  • Walk-in. This is the opposite of the empty frame in which the subject leaves. In this shot the camera begins recording before the subject enters.
  • Discovery. This shot opens with the focus on one item that is not the primary subject of the sequence. When the primary subject enters, the camera follows that subject to develop the sequence further.
  • Move through the frame. In this shot the camera watches a subject approach from a distance, continue across the frame and exit near the camera. The next shot can be of the subject leaving.
  • Slow/Fast motion. To produce slow motion the photographer must record at a faster speed than will be used for playback; this can also be accomplished in editing. The reverse process -- shooting at a slower speed than the playback speed -- speeds up the action.

Video Editing

Overview. In this section, we use the word editor to mean the person doing the editing. We will use "the system" to mean the editing software program. In non-linear editing the original files stored in the editing system are not disturbed in the process. The editor will create an editing project and import various excerpts (copies) of the shots, audio, graphics and photographs into the project's library.

The editor may elect to work on various parts of a large project before beginning to assemble the final project. This preliminary work can include trimming various shots, modifying the shots with filters or audio adjustments. This preliminary work can also include putting together various major sub-sections of the larger project.

When the editor has selected the parts of the final project and put them in the order in which the editor wants to present them, it may be necessary to save the files in a process often called rendering. Rendering merges the various elements into a single, finished version of the project. The final step is to export the final project in the appropriate format for the intended playback medium.

Basic editing methods. The capabilities of the editing system enable or limit the editing methods an editor can employ. The most basic consumer editing system will limit editing to putting shots into a one-after-the-other order, with a few techniques for moving from one shot to the next. At the other end of the scale, advanced editing systems enable the editor to insert video, sound, graphics and still pictures separately and to enhance the quality of video and audio segments. Here are brief descriptions of the primary editing tools.

Storyboard editing. This tool enables an editor to open a project, import various video clips with their attached audio and put them into a one-after-the-other sequence. Thumbnail pictures of the opening frame of each shot are displayed in the order in which they have been assembled -- thus the name storyboard. The pictures do not tell the editor how long the various elements are, so this elementary system limits the editor's ability to adjust pace. Storyboard editing can be used in audio-only editing, too.

Timeline editing. Timeline editing is the advanced alternative to storyboard editing. In this process, the individual elements of the project are placed on a timeline beginning at zero and extending to whatever the full length of the project becomes. Thumbnail pictures from the first frame of each video element appear at the place on the timeline where the editor has placed them. The program shows the length of each element by a change in color or by lines drawn along the timeline.

The project's timeline will appear in a horizontal bar, usually near the top of the editing system's project window.

Multiple tracks. Advanced editing systems have multiple tracks into which audio, video, graphics and photographs can be inserted. Editors can insert the various elements of the project and, using in and out commands, signal the system which tracks to activate at a specific time. The tracks appear in a vertical stack below the project timeline.
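
No single editing program is assumed in this lesson, but a timeline with stacked tracks can be pictured as a simple data structure. The sketch below is an illustrative assumption, not the internal model of any particular editing system.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        name: str
        start: float     # where the clip begins on the project timeline (seconds)
        duration: float  # how long the clip plays (seconds)

    # Each track is an ordered list of clips; the tracks stack below the timeline.
    timeline = {
        "video 1": [Clip("opening close-up", 0.0, 5.0), Clip("wide shot", 5.0, 4.0)],
        "video 2": [Clip("cut-away of report", 6.0, 3.0)],   # inserted over the primary shot
        "audio 1": [Clip("interview sound", 0.0, 9.0)],
    }

    project_length = max(clip.start + clip.duration
                         for clips in timeline.values() for clip in clips)
    print(project_length)   # 9.0 seconds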

Assemble editing. Assemble editing is used by editors to put shots in a one-after-the-other sequence. Editors can use assemble editing with the storyboard systems or timeline systems.

Insert editing. Insert editing allows the editor to separate the insertion of video from its related audio. With this technique, for example, an editor may insert the audio of an interview while showing pictures of the speaker doing whatever makes him or her newsworthy, rather than speaking the words on camera. Insert editing requires an editing system that permits the editor to use separate tracks for the various elements. These separate tracks are then merged in the rendering step.

In and Out commands. Although an editor may have trimmed a shot to the length the editor thinks will be appropriate, on reviewing the edit he or she may discover that the clip should begin or end at some other point. The editing system enables the editor to insert an "in" command to tell the system when to begin playing this particular shot, and an "out" command to indicate when playing this element should stop. The in and out commands can also be used to tell the system when to stop using the video or sound on one track and begin to use it on another.
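
As a small illustration of what in and out points do, the sketch below trims a clip without touching the original file; the clip length and trim points are assumptions invented for the example.

    clip_length = 12.0      # seconds in the imported shot (the original file is untouched)
    in_point = 2.5          # where the system should begin playing the shot
    out_point = 9.0         # where the system should stop playing it

    played_duration = out_point - in_point
    print(played_duration)  # 6.5 -- only 6.5 of the 12 seconds appear in the project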

Overlapping edits. One technique that enables the editor to achieve an appropriate pace for a project is the insertion of sound a few frames ahead of its related video. One might hear an auto starting before seeing the car. The alternative of seeing before hearing is also a tool available to the editor. This technique requires a multi-track editing system.

Transitions. Transitions describe the techniques the editor uses to move from one shot to the next. Sometimes the photographer can shoot transitions, such as softening the focus between shots; most of the time the editor will insert transitions in the editing process. Often the transition between shots is just a cut -- the last frame of the first shot appears and the first frame of the second shot appears next. This is particularly true inside a sequence. Other transitions are more likely between sequences, where they can symbolize the passage of time or permit movement from one location to another. A common transition beyond the plain cut is the fade, in which the editing system fades out the first shot as its final frames are being played and fades in the next shot as its first frames are playing. Another common transition is the wipe, in which the new shot replaces the former shot by appearing to wipe over it. Many editing systems contain a host of prepackaged transitions -- flips, twirls, spins, etc. Unfortunately, these transitions often call more attention to themselves than they do to advancing the story. Editors should use these "fancy" transitions rarely.

Drag and drop. Advanced editing systems permit editors to easily insert video, audio, graphics or photographs into a timeline by selecting them from the project library and dropping them into an appropriate place in one of the tracks along the project's timeline. The editor can then select a thumbnail on the timeline and drag it to the left or right to refine the placement.

Video filters. Many editing systems have tools to enable the editor to adjust the brightness or contrast of a video clip. Other filters can adjust the colors, transform a segment to black and white, etc. News editors use fewer of the exotic filters than theatrical editors because journalism is obliged to reflect reality.

Audio Editing

Overview. Audio can be edited as part of a video package or as a separate audio-only production. Audio tracks appear horizontally across the editing window below the timeline. Audio appears in the tracks as a series of vertical lines; lines that fill the track from top to bottom represent louder sound than shorter lines, and the lines cluster to indicate the frequency of the sound. Editors can insert in and out marks to trim individual audio segments and to indicate to the editing system when it should play a particular portion of the track.

Audio for a project can appear on a single track (monaural) or two tracks (stereo). Editors commonly work on the two tracks simultaneously, though the tracks can be handled separately for special effects.

Levels. Editors can adjust the loudness of audio segments and projects. Digital reproduction has limits on loudness; audio that is louder than the system maximum will be clipped and the sound may be distorted. Editing systems have displays to indicate the loudness of an audio segment and knobs, sliders or other devices to adjust the levels.
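
Here is a minimal sketch of why sound above the system maximum distorts; the numbers are illustrative assumptions, with samples scaled so that 1.0 is the loudest value the system can store.

    def clip(sample, maximum=1.0):
        """Any sample louder than the system maximum is flattened (clipped)."""
        return max(-maximum, min(maximum, sample))

    too_loud = [0.4, 0.9, 1.3, 1.7, 0.8]   # a burst that exceeds the maximum
    print([clip(s) for s in too_loud])     # [0.4, 0.9, 1.0, 1.0, 0.8] -- the peaks are flattened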

Transitions. Editors may also want to insert transitions between audio segments, as they do with video segments. The most common transition is the cross fade. The editor places the second clip on a second track and begins playing it before the first clip has finished, lowering the volume of the first segment while increasing the volume of the second. The second segment reaches full volume as the first clip ends.
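
A cross fade is simply two volume ramps that overlap. The sketch below shows the idea over a handful of steps; the five-step overlap is an illustrative assumption.

    steps = 5   # length of the overlap, in equal steps

    for i in range(steps + 1):
        fade_out = 1.0 - i / steps   # the volume of the first segment falls...
        fade_in = i / steps          # ...as the volume of the second segment rises
        print(f"step {i}: first segment at {fade_out:.1f}, second segment at {fade_in:.1f}")
    # The second segment reaches full volume exactly as the first segment ends.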

Filters. Many editing systems offer a series of filters to address specific issues of audio quality. Commonly used filters include the following:

  • Normalization. This filter can increase or decrease the amplitude (loudness) of an audio clip to reach a target level set by the editor (see the sketch after this list).
  • Equalizers. These filters give editors control over specific frequencies in an audio segment. They might be used to remove or reduce a specific annoying sound, such as a buzz or hiss. Too much use of these filters will create an audio segment that does not sound natural, because the primary audio may also contain some of the frequencies of the unwanted sound, and those will be removed along with it.
  • Echo or reverberation. These filters insert copies of the original sound to make the playback echo or reverberate as if projected in a large hall with the sound bouncing off the walls. They are seldom used for news work.
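
As promised in the normalization entry above, here is a minimal sketch of peak normalization; the clip values and the 0.9 target are illustrative assumptions, with 1.0 again standing for the system maximum.

    def normalize(samples, target_peak=0.9):
        """Scale a clip so its loudest sample reaches the target level."""
        current_peak = max(abs(s) for s in samples)
        gain = target_peak / current_peak
        return [s * gain for s in samples]

    quiet_clip = [0.05, -0.1, 0.2, -0.15]
    print(normalize(quiet_clip))   # the loudest sample is now scaled to the 0.9 target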

Ambient sound. The reporter or photographer should record some of the natural sound of the location of the shooting or recording. This ambient sound can often be useful to provide a sense of continuity when it is played in the background during edits of the primary sound.

Finishing Up

When an editor has assembled and inserted all of the video, audio, graphics and still pictures, the project can be saved. Saving enables the editor to re-open the project at a later time to make additional changes.

Getting the project ready to play on a system other than the one on which it was edited requires the editor to merge all of the tracks into a single presentation. This process is often called rendering. For large projects rendering can take substantial time, and editors need to allow for that time when calculating deadlines for news production. The rendered project will be given another file name and file extension.

The final step may be to convert that rendered file to other formats for playback. When that is done, the editor can export the converted files with yet other names and extensions.

Scripting

Each news organization will have its own script terminology and format. You can learn those specific applications as part of your orientation to that workplace. For the lessons in this workshop we will deal with the primary elements that are likely to be present in any format.

Those primary elements include a presenter's lead-in to the piece (video or audio), a description of the type of audio or video element, its length in time, the outcue (the final words of the audio or video piece), and the tag or follow-up line for the presenter to read, if any.

The lead-in should be considered part of the story and written as such. The presenter can give the "nut-graph" of the story -- a sentence or two about the importance of what is to follow -- or the presenter can deliver the lead of the piece with the story package that follows picking up in the second paragraph of the story.

Here are some variations on presenting audio or video elements inside a broadcast or Internet news bulletin.

  • Voice over. In this case the presenter will read the introduction and continue to read the story as video comes on screen replacing the picture of the presenter.
  • Sound-on-tape. This is a label left over from the era of audio and video recording tape (now replaced by digital files). In this case the presenter reads a lead-in to an excerpt of a reporter or news source saying something (a sound bite). The sound bite often is followed by another sentence or two from the presenter about the story.
  • Package. In this case the presenter reads a lead-in sentence or paragraph, introducing the reporter in the process. The reporter picks up the story from that point. The package often concludes with a standard outcue (the reporter stating his or her name, affiliation and location).
[Image: generic script format]
