Our first two installments in this series covered proper ensemble recording and EQ, which, like any musical skill, will improve with practice. For the brave and/or crazy folks out there, this article will discuss editing a stereo recording by combining multiple recording takes. This is valuable not only for recording, but also for editing together songs (with permission) for color guard or dance routines.
Before submitting any edited recordings for competition or evaluation, please read and follow any rules regarding edits. With a little practice using the techniques below, you will be able to edit in ways that are virtually inaudible, so we are all on the honor system here!
The process of recording and editing together several takes can produce a great and polished album for distribution, but it also provides students with valuable, real-world experiences. (Some may even call these career skills or cross-curricular learning on a lesson plan!) I would frequently record in sections with my students and create an edited recording, then challenge them to produce the same results live. This was a great way to teach critical listening and inspire young musicians to stay focused through an entire piece.
Splicing
In order to splice together live audio, there are some important rules to follow during the recording process. I have unfortunately learned all of these the hard way.
Recording Rules for Editing:
1. Be as consistent as possible. Definitely keep the recording levels and microphone positions the same, but also pay attention to AC/heating systems or other noise-making objects to ensure that each take has identical background noise. Our ears have learned to tune these out, but they become painfully obvious in a recording.
2. Encourage your students to be very consistent with bell angles and to be very still. Avoid shuffling feet, creaking chairs, dropping mutes, loud page turns, et cetera.
3. Resonance, resonance, resonance! I cannot emphasize this enough. In most cases we do not get to record in acoustically treated environments; therefore, our rooms resonate. Begin every take by “warming up the room” with at least five seconds of sound before your intended editing location, and end every take with the same amount of sound after your intended ending point. The most common error is wanting to splice in at rehearsal letter B, starting the take right on letter B, and later noticing a drastic difference in room tone between the spliced sections of music. Which brings us to our last rule…
4. Record everything like you are going to use it. So many times the section of music you think you want to use for an edit ends up not working out, and you need to use material recorded at a different time. Also be sure to record equal amounts of every section of the piece. Too many rehearsal and recording sessions focus on the first half of a piece, run out of time, and then capture the ending as one or two long takes, with students playing on noticeably tired chops!
The Editing Process
Start by lining up all your full runs and sections in separate tracks. This makes it easy to see how many takes you have of each section of musical material. The waveforms will have slight discrepancies, so look for the louder, clearly defined waveforms and zoom in as close as possible to line them up. This will look something like Figure 1.
Figure 1: Getting organized.
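For the technically curious (or as a cross-curricular tie-in with a programming class), the idea of lining up two takes can be sketched in a few lines of Python with NumPy. This is purely an illustration of the concept, not a Studio One feature; it assumes take_a and take_b are mono audio arrays loaded at the same sample rate, and you would still confirm the alignment by eye and by ear in your editor.

```python
import numpy as np

def alignment_lag(take_a, take_b, sample_rate, search_seconds=2.0):
    """Estimate how far apart two takes sit by cross-correlating
    a short window from the start of each one."""
    n = int(search_seconds * sample_rate)
    a, b = take_a[:n], take_b[:n]
    # The peak of the cross-correlation marks the best alignment.
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    # Positive lag: the music in take_a arrives later in its file
    # than in take_b, so nudge take_a earlier by this many seconds.
    return lag / sample_rate
```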
To make an edit, you will actually overlap an identical section of audio and crossfade between the takes. As a general rule, all of this should happen in less than 0.015 seconds (15 milliseconds), and therefore will be inaudible to the human ear. Set the time base in your audio editor to seconds so you can keep track of the length of your edit. In most audio editors, including Studio One, you can simply hit “X” to create the crossfade, as in Figure 2.
Figure 2: Crossfade.
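If you are wondering what that crossfade actually does to the audio, here is a minimal sketch of a simple linear crossfade in Python with NumPy. It is not Studio One's implementation, just the general idea: the outgoing take ramps down while the identical, overlapping section of the incoming take ramps up, all within about 15 milliseconds.

```python
import numpy as np

def crossfade(take_out, take_in, sample_rate, fade_seconds=0.015):
    """Fade out the end of take_out while fading in the start of
    take_in over the same ~15 ms stretch of overlapping audio."""
    n = int(fade_seconds * sample_rate)
    fade_down = np.linspace(1.0, 0.0, n)   # gain ramp for the old take
    fade_up = np.linspace(0.0, 1.0, n)     # gain ramp for the new take
    overlap = take_out[-n:] * fade_down + take_in[:n] * fade_up
    return np.concatenate([take_out[:-n], overlap, take_in[n:]])
```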
But wait… it’s just not that easy. When cutting out sections of your recorded audio and choosing where to actually make a crossfade, pay careful attention to always cut on the “zero line”. This is the center line of your waveform track and represents the neutral position of your speaker cone.
Science lesson time: The vibration of a speaker cone, or driver, is a direct representation of a sound wave. The sound wave moving above and below the zero line translates to the speaker driver moving forward and backward from its resting position.
So why does this matter when editing? If your crossfade point falls where the waveforms are far off the zero line, the speaker driver is forced to jump abruptly from one position to another, which interrupts its smooth motion. This creates the sound of a “pop” or “click” at your edit. Larger and louder speaker systems will create larger and louder pops, so what you may not hear on your cheap earbuds will end up haunting you on a large system! Be sure to edit on studio monitors so you can hear all the details. See Figures 3 and 4 for good and bad examples.
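If it helps to see the zero-line rule in concrete terms, the sketch below nudges a proposed edit point to the nearest sample where the waveform actually crosses zero. Many audio editors offer a snap-to-zero-crossing option that does this for you; this Python/NumPy version is only a mental model.

```python
import numpy as np

def nearest_zero_crossing(audio, edit_sample):
    """Move an edit point to the closest spot where the waveform
    crosses the zero line, so the speaker cone is not forced to jump."""
    # A zero crossing is where consecutive samples change sign.
    crossings = np.where(np.diff(np.sign(audio)) != 0)[0]
    if len(crossings) == 0:
        return edit_sample                 # pure silence: nothing to fix
    return int(crossings[np.argmin(np.abs(crossings - edit_sample))])
```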
Where to Splice?
When selecting where to make an edit, look for the place where the waveforms line up best and the zero line is most accessible. A splice does not have to happen right at the beginning of a measure or a musical section; in fact, that is exactly where people will expect to hear one. An edit can be anywhere, so look for the easiest place to make the splice instead of the most musical.
I find that it is easier to make an edit from a softer sound into a louder sound. Look for the beginning of a note, preferably one with a well-defined articulation. See Figure 5 for an example of a good editing place. With practice, you will learn to identify these easily and to “read” waveforms much like a computer programmer reads code.
Figure 5: Ideal editing place.
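“Reading” a waveform mostly means spotting the sudden jumps in level where a note begins. As a rough illustration, here is a Python/NumPy sketch that flags spots where the short-term level rises sharply; the window size and jump threshold are arbitrary values chosen for the example, not magic numbers.

```python
import numpy as np

def find_attacks(audio, sample_rate, window_seconds=0.01, jump=4.0):
    """Flag sample positions where the short-term level jumps sharply,
    a rough stand-in for 'well-defined articulations'."""
    n = int(window_seconds * sample_rate)
    frames = len(audio) // n
    # Average level of each short window (about 10 ms each).
    levels = np.abs(audio[:frames * n]).reshape(frames, n).mean(axis=1)
    attacks = []
    for i in range(1, frames):
        if levels[i] > jump * (levels[i - 1] + 1e-9):
            attacks.append(i * n)          # start of the louder window
    return attacks
```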
Did I mention resonance? Listen to your edit carefully to ensure that the two takes have the same resonance and balance. A good splice is very hard to hear, but a sudden change in ensemble balance, a sudden loss or gain of resonance, or the abrupt addition of AC/heater noise in the room will give away a splice every time. Through this process, you will learn how to structure a recording session, using the rules above, so that your takes fit together.
Adding Effects
Effects should be added after all the editing is complete. Personally, for a live recording I do not see the need for many plug-ins or effects, with the exception of reverb if your room needs it. However, I always caution folks to use reverb sparingly. Reverb is like a spice in cooking: too much of it can ruin a dish. It must be used carefully, in combination with proper recording techniques and EQ. The best advice here is to add reverb gradually until you clearly notice that reverb has been added to the track, and then back it down one level. I think of reverb like vibrato: it should organically add to the overall quality of a good sound, not stand out on its own.
The other place where digital reverb becomes noticeable is the end of a note followed by silence. Find a section of audio that ends loudly and is followed by silence, and then use this to judge how much added reverb “holds over.” You may find yourself reducing the amount of reverb even more based on this test, which is perfectly fine.
Figure 6 shows an example reverb plug-in, in this case the Room Reverb in Studio One. While a reverb has many parameters, the two you should know are Length and Mix. Length is the amount of time, in seconds, that the sound will sustain, so it is a quick way to adjust how much reverb you hear at the end of a note. Mix is the percentage of the output that is reverberated (“wet”) sound, and it can be used to add or subtract overall reverb from the recording.
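To demystify the Mix control: it is simply a blend of the untouched (“dry”) signal and the reverberated (“wet”) signal. The sketch below shows that blend in Python/NumPy; it is a generic illustration, not the Room Reverb algorithm, and make_reverb stands in for whatever processing the plug-in actually does.

```python
import numpy as np

def apply_mix(dry, wet, mix_percent):
    """Blend the dry (original) and wet (reverberated) signals.
    0 = no reverb at all, 100 = reverberated sound only."""
    mix = mix_percent / 100.0
    return (1.0 - mix) * dry + mix * wet

# Example: a conservative 20% reverb in the final mix, where
# make_reverb is a hypothetical function returning the wet signal.
# final = apply_mix(edited_master, make_reverb(edited_master), 20)
```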
Conclusion
Editing can drive you absolutely mad, especially if you are a detail-oriented person. Since most music instructors are highly organized, most of us are susceptible to the endless quest for editing perfection. Here are a few tips to help keep you on track:
1. Set a time limit. If you have two hours to edit, you will make smart decisions and prioritize which cuts you want to make and which ones you do not.
2. Use large sections. I sometimes set an eight or sixteen measure minimum on myself to prevent too many splices. This segues perfectly into our last tip…
3. You can’t fix everything. When you listen to a recording many times, you will notice more and more places where you want to splice, but always take a step back and think forest instead of trees. Realistically speaking, if a recording needs ten edits in two minutes, then the ensemble should rehearse more than it records.
In the next installment, we will focus on multi-track recording and explore different types of recording microphones.
John Mlynczak is president-elect of the Technology Institute for Music Educators, director of education for PreSonus Audio, and a frequent clinician on music and technology at conferences and school districts across the country.