
Sound "Digital Recording in the Field", by Lawrence Loewinger
American Cinematographer  June 1986.  Vol. 67, No. 6

Though this article describes some unusual film projects, it is really about the new technologies which make them possible, and about how these technologies can be combined to produce extraordinary results. A few words about technical terms and machines: sync reference refers to the time-base signal, usually generated by a crystal oscillator, that locks, resolves or slaves the speed of one machine to several others. Sync reference equals time base. The standard sync reference in film is the Neopilot tone sync pulse or its more recent variant, the FM stereo sync pulse. Both are simple sync references. SMPTE Time Code uses a time base as its reference to generate time information which can be read visually. While the film sync pulse can only lock machines together, time code can both lock machines together and provide a continual read-out of the exact location on film and/or audio and video tape. From a film point of view, time code acts on the set as a combined sync reference and slate, and in post production as both a sync reference and electronic edge code numbers. For all practical purposes, the Sony PCM-F1 is equivalent to the Nakamichi DMP-100. Both are PCM processors and both are manufactured by Sony.
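
As a rough illustration of what that continuous "address" amounts to (my own sketch, not part of any SMPTE specification, assuming a simple 24 fps non-drop-frame count), the arithmetic behind converting a time-code reading to an absolute frame count and back looks like this:

FPS = 24

def timecode_to_frames(hh, mm, ss, ff, fps=FPS):
    # Absolute frame number for an HH:MM:SS:FF address.
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames, fps=FPS):
    # The reverse conversion, back to an HH:MM:SS:FF address.
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(timecode_to_frames(0, 10, 0, 0))   # 14400 frames at the ten-minute mark
print(frames_to_timecode(14400))         # 00:10:00:00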

Drums Across the Sea, my Cuban adventure, is a film about the roots of Afro-American music. Directed and photographed by Les Blank, famous for many music films and for Burden of Dreams, the behind-the-scenes glimpse at the making of Fitzcarraldo, this project took us a mere 90 miles from Miami but into a very foreign world--Cuba. This was an ambitiously conceived project, to be shot in Super 16mm and recorded with as large a multi-track rig as we could squeeze onto the charter airplane. Remember, there is no air freight between the United States and Cuba. The centerpiece of our audio set-up was the Otari Mark III/8. We surrounded it with two of the English-made "Location" mixers, a 16x2 and an 8x2, the latter for monitoring. We also brought a stereo Nagra for simple location work and a raft of microphones, a good deal of cable including an 8x8 multi-channel cable or "snake", Boston A40s as our monitor speakers, an RTS communications package and a video monitoring system. The latter, which allowed me to mix off the set and still see what was going on, proved invaluable.

Video monitoring is common practice for any large-scale audio/video production. It is also very useful for feature work. This was not an everyday film package and, as I quickly realized, it would require a permanent yet mobile installation. The Cubans offered us one of their production vehicles, a sturdy if primitive Russian truck. In a day we had transformed it into a simple mobile studio. It was neither elegant to look at nor comfortable to work out of, but it got the job done. Where this rig was too cumbersome or there simply wasn't enough time for setup, we made do with the stereo Nagra and a simple stereo mike configuration.

The producers, Howard Dratch and Gene Rosow, together with Les Blank, had previously scouted the musical groups they wanted to film. As they soon discovered, scouting a group in Cuba and actually filming one are two quite different things. Some groups slipped through our fingers and others appeared to take their place. Our month-long stay in Cuba was an exercise in continually changing plans and constant negotiation with the Cubans. Flexibility is not the byword in a third-world country, especially a socialist one. Nor does a recording truck move with the alacrity of a documentary crew. The Cuban officials we dealt with were always cooperative, and what we saw of the country, on the streets and from inside one of the film institute's buses, was fascinating.

As the month evolved we solved our problems and moved with ever-increasing speed. Most of the events we filmed were small ones which, whenever possible, we had staged out-of-doors. These Les Blank filmed alone while I recorded with the stereo Nagra. Most of the normal documentary recording was done by his perennial recordist, editor, and co-director, Maureen Gosling. For the larger concerts we used two cameras and the truck. That meant finding enough alternating current to power our lights and the truck. Generally that was easy, but in one instance, when we were filming "Irakere," one of Cuba's most popular bands, I discovered just before showtime that the line voltage was only 90 volts. Luckily, the band carried its own voltage regulator, which it shared with us.

Though I didn't know it at the time, Drums Across the Sea was a rehearsal for the next three projects. Our sync reference was a standard 60 Hz sine wave, which we recorded using the Hafflex POM Crystal Checker, specially modified for the project. I devised a windscreen to house the M-S stereo microphone pair. Otherwise, everything was used as it came out of the box.

Mid-Side (M-S) stereo miking is probably the best simple stereo technique for film and video recording, as it assures you absolute mono compatibility and some flexibility in post production. The technique involves two microphones: one front-facing, usually but not necessarily a directional-pattern mike, and the other a side-firing bi-directional, or figure-of-eight, microphone. Technically, the front-facing mike represents a combination of the left and right channels (L+R), while the bi-directional mike creates the difference channel (L-R).
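
As a minimal sketch of that sum-and-difference relationship (my own illustration, not any particular manufacturer's matrix box), the arithmetic of decoding undecoded M-S tracks back into left and right, with a simple width scale on the difference channel, looks like this:

import numpy as np

def ms_encode(left, right):
    # The mid channel carries L+R, the side channel carries L-R.
    return left + right, left - right

def ms_decode(mid, side, width=1.0):
    # width scales the difference channel, which is how a matrix widens
    # or narrows the stereo image; width = 0 collapses it to mono.
    left = 0.5 * (mid + width * side)
    right = 0.5 * (mid - width * side)
    return left, right

# With width = 1.0 the original left/right signals are recovered exactly.
left = np.array([0.2, 0.5, -0.1])
right = np.array([0.1, -0.3, 0.4])
mid, side = ms_encode(left, right)
print(ms_decode(mid, side, width=1.0))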

In practice, the front-facing mike records the direct information and the bi-directional mike records the ambient information. Normally, the information is recorded as two channels of discrete mono. Making normal left/right stereo requires a matrix, a simple, single-control black box. The most prevalent one is manufactured by Audio Engineering Associates of California. The matrix permits you to adjust the width of the stereo image by increasing or decreasing the amount of the L-R channel. To obtain the full advantage of M-S stereo, I would recommend recording the tracks on location as undecoded M-S and matrixing them into left/right stereo in post production. For monitoring purposes, however, one should carry a matrix so that everyone can hear the likely end product.

Living in the back of a Russian truck with less than state-of-the-art springs, the Otari 8-track proved to be a rather hardy little beast. Because one of the possible products from the Cuban filming was to be analogue discs/records, I enhanced the Otari's performance with dbx noise reduction. As a noise reduction system dbx reduces tape hiss considerably, but its response to transients is less than ideal. On our first night of multi-track recording I did not have time to install the dbx system. Comparing that night's tapes with the others, I have to admit a slight preference for the tapes made without it.

There is no greater contrast than that between the manicured lawns of Tanglewood, Massachusetts, and the crowded, aggressive, sensual world that is Cuba. The Boston Symphony Orchestra spends its summer at Tanglewood. A little less than two months after I got back, I received a call from Albert and David Maysles, long-time documentary filmmakers and pioneers of the cinema verité style, to work on their film about the noted Japanese conductor Seiji Ozawa. If a conductor's life is music, his instrument is the symphony orchestra.

Classical music is sometimes recorded in multi-track format but more generally in real stereophony and with digital equipment. In this case, stereo was a requirement of the film's sponsor, Columbia Artists Management. How were we to square these requirements with the more limited exhibition capabilities of 16mm? Moreover, we were to do multi-camera shoots in the Tanglewood Shed during actual performances, which would require a silent, discreet slating system and considerable co-ordination between the various camera crews.

On this film, entitled Ozawa, I worked in several capacities. I did some of the documentary filming. As is customary in these situations, a music producer was brought in to handle the recording of the Boston Symphony Orchestra. The producer, Max Wilcox, who has a long association with the orchestra, hired recording engineer John Newton to do much of the hands-on work. My role was to co-ordinate their work with the filming, especially to monitor synchronism, and to record the many other musical activities in which Maestro Ozawa was involved.

Recording in stereo always raises the questions of mono compatibility and of exhibition/distribution potential versus expense. Can the stereo actually reach an audience? The answer is a qualified yes, whether as an interlock projection, on video cassette or in television broadcast form. As regards cost effectiveness, there is no ready answer.

Unlike Cuban popular music, classical music lends itself to simple stereo miking techniques. The nature of these techniques determines the degree of mono compatibility. I used the M-S technique for the rehearsals, classes and performances I covered, knowing I was giving the editor, Deborah Dixon, and the post production mixer, Lee Dichter, the flexibility to manipulate the stereo image according to the dramatic needs of the film. Max Wilcox employed a variant of the traditional American stereo technique--spaced omni microphones. The virtue of this technique is that it reproduces a more reverberant field, which, in the dry acoustical environment of Tanglewood, an open shed, is useful. The drawback results from the fact that, unlike M-S, which relies only on amplitude (loudness) differences to create stereo, spaced omnis depend on both amplitude and phase (time) differences. Spaced omnis do not collapse into mono as effectively as a co-incident technique such as M-S. Several high-end microphone manufacturers are either making or planning to make single-unit M-S microphones with matrixes, suggesting that this will become the dominant simple stereo mike technique for film and video work. In fairness to Wilcox and Newton, when I attended the film mix of Ozawa, I heard stereo musical performances of the Boston Symphony that were sonically splendid.
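
To put a rough number on that mono-compatibility point, the sketch below (my own illustration, assuming a 2 ms inter-channel delay, roughly a 0.7 m path difference between spaced omnis) shows how the mono sum of a delayed pair nulls out at regular frequencies, whereas a coincident pair, having no time difference, sums cleanly at every frequency:

import math

delay_ms = 2.0   # assumed inter-channel delay for a spaced pair
for f in (250, 500, 750, 1000, 1250):
    phase = 2 * math.pi * f * delay_ms / 1000.0
    # level of the mono (L+R) sum relative to a coincident pair at the same frequency
    gain = abs(math.cos(phase / 2))
    print(f"{f:5d} Hz  relative mono-sum level = {gain:.2f}")
# For this delay the sum cancels completely at 250, 750 and 1250 Hz --
# the comb filtering that makes spaced pairs fold down to mono less gracefully.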

On the Cuban project the Sony PCM-F1 was little more than a glint in my mind's eye. On Ozawa it stood in the wings. All the large-scale recordings were done on stereo Nagras with Dolby A noise reduction. The Sony PCM-F1, feeding into a Beta video recorder, provided the digital backup. The sync reference was the standard film reference, 60 Hz, which was recorded in the normal manner down the center track of the stereo Nagras. In an unmodified PCM-F1, which is what we used, the crystal clock operates at the standard NTSC time base of 59.94 Hz. To ensure film sync on the digital recording we laid down a 60 Hz sine wave on the analogue track of the video deck.
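
The difference between 59.94 Hz and 60 Hz sounds negligible, but a quick back-of-the-envelope calculation (mine, not the article's) shows why the 60 Hz reference on the analogue track matters: resolving material against the wrong time base introduces a speed error of about 0.1 percent, roughly 3.6 seconds of slippage per hour.

film_base = 60.0      # Hz, standard film sync reference
ntsc_base = 59.94     # Hz, the unmodified PCM-F1's crystal clock
error = film_base / ntsc_base - 1.0          # fractional speed error, about 0.001
drift_per_hour = error * 3600.0              # seconds of drift over an hour of material
print(f"speed error: {error * 100:.3f}%   drift: {drift_per_hour:.1f} s per hour")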

We now take film synchronism almost for granted. The film cameras and tape recorders we use have provided us with an extraordinary degree of reliability. But as we move more readily between film and video, as the film market becomes more international, and as SMPTE Time Code begins to replace the 60 Hz sine wave as a film sync reference, synchronism is no longer a simple matter. There are some basic facts to remember. If your crystal clocks are of high accuracy you can record using different time bases and, provided the materials are resolved to the same time bases at which they were recorded, they will be in sync. In practice, however, it is far better to film, tape and audio record using the same time base. That means that if you want to record digitally using the Sony PCM-F1 you should modify it to accept an outside sync reference, whether a sine wave or a video signal. The key is to know the time base, find out the post production techniques to be employed and, wherever possible, use time code rather than sine wave as a sync reference. Time code has the considerable advantage over sine wave of being a numerically visible clock, always keeping you in sync and informing you continuously of your exact location or "address." Once time code can be effectively applied to 16mm or 35mm film, it will simplify set slating procedures and rapidly accelerate post production. At that point film can adapt itself as readily to the computer as video has done.

It was many months later when Susan Froemke, the Maysles' long-time producer, asked me in hushed tones over the telephone if I would show up at the Steinway piano showroom in Manhattan. So began Vladimir Horowitz: The Last Romantic. Horowitz is a legend in the concert world, best known, perhaps, as a recording artist, and it was a rare event to film him in performance in his New York studio and to catch him in conversation with his wife, Wanda. Once again the same production team went into action: the Maysles brothers and Peter Gelb of Columbia Artists Management. John Pfeiffer, Horowitz's long-time record producer, was hired to supervise the music recording. Like Ozawa, the Horowitz project was intended for multiple release, but with one notable difference. In addition to a premiere stereo screening at Carnegie Hall, video cassette release and television broadcasts, we were asked to produce the materials for a compact disc and an analogue record set. The task was to make sure our technology could accommodate this multiple-format release.

Logistically, Horowitz was the simplest of these four pictures. It was filmed in one place for six days over a two-week period. By filming standards the hours were quite civilized. Technically, it was the most complex. We had two weeks to design a system that would generate a uniform time code whose time base and frame reference would be usable in post production, and to devise a stereo mike configuration which would be discreet, if not invisible, to the cameras, sound good, and somehow avoid the noise the film crew would necessarily make. When I write "we" I really mean we on this particular project. The technical success of Horowitz was due in no small measure to the support I received from New York's audio community, film and video workers alike. Bill King and Mike Shoskes were my on-set colleagues. Jerry Bruck of Posthorn Recordings, John Hampton of Star Tech Audio, Dave Smith of Editel, Guy Genin of Zellan Optics, and Paul Yaeger and David Leitner contributed their share of useful information and assistance.

For the first task, generating a 24-frame time code whose time base we could make 60 Hz, we chose the new time-code Nagra, the IV-S TC. We then acquired a Sony PCM-F1 modified to accept the 60 Hz from the Nagra IV-S TC as its time base. Note that most of the modifications done in New York or Los Angeles on the PCM processor permit it to accept either an audio or a video sync reference. We used the Nagra IV-S TC to generate time code and sync. A second stereo Nagra was used to record the dialogue, mostly through Audio Ltd. wireless microphones. Two video recorders, one VHS and the other U-Matic, stored the digital music. Two video decks offered several advantages. While one was used as backup for the other, we could also dedicate each VCR to a particular purpose. The U-Matic machine provided tapes for the digital record editing. The VHS machine, an industrial-grade Panasonic 6800, operated as a 6-track audio recorder--two digital channels, two VHS Hi-Fi channels and two linear audio channels with Dolby B noise reduction. During the shoot we maximized the audio capabilities of the 6800 for music, backup dialogue, time code and sync pulse.
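
One way to see why a 60 Hz time base marries so neatly to 24-frame code (my own arithmetic, not a procedure from the production): each film frame corresponds to exactly two and a half cycles of the sync reference, so the two references stay locked with no long-term drift.

fps = 24          # film frame rate carried by the time code
sync_hz = 60.0    # sine-wave sync reference
cycles_per_frame = sync_hz / fps
print(f"{cycles_per_frame} sync cycles per frame")           # 2.5
print(f"{int(5 / cycles_per_frame)} frames per 5 cycles")    # 2, an exact whole-number lock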

The most unusual choice we made was in our miking technique. Maestro Horowitz's studio is no larger than an ample living room. There was no way we could place mikes in the room without them either interfering with the camera crew or vice versa. The solution to this dilemma was simple. Into the piano itself went an M-S stereo pair in the form of a Schoeps boundary mic, omnidirectional in pattern, and a bi-directional capsule. From that point on the mikes were never seen and, from the microphones' point of view, the crew was never heard. That meant that the piano sound, which was dry and close up, was to be enhanced in post production, much like a rock 'n' roll record.

While I think it was a clever solution to a vexing problem, it created its own set of issues, which only became apparent further down the post production chain. In the finished film the piano has a definite ambient sheen to it. The background noise is close to inaudible. The piano was recorded digitally and then transferred to 35mm magnetic film with Dolby A for the mix. Because the Horowitzes speak softly, I had to use more than a normal amount of gain to get them on analogue tape. If you add together the noise generated by the radio mics, a mixer, the Nagra, several generations of tape and magnetic film, plus the electronics of the mixing facility, you get a noise level which sounds enormous only when compared to the almost complete silence behind the piano. It was really a normal noise level in any circumstance save this one. When the editors, Deborah Dixon and Pat Jaffe, brought this problem to Lee Dichter, who was once again the post production mixer, he came up with a simple solution. After a musical selection ended he would fade up the background level before introducing the dialogue. This accustomed the audience to the change in background level.
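
The way those separate noise contributions pile up is worth a rough sketch (the dB figures below are purely illustrative, not measurements from the production): uncorrelated noise sources add on a power basis, so a handful of individually respectable noise floors ends up several dB above the best single one.

import math

def combined_noise_floor(levels_db):
    # Sum uncorrelated noise floors (given in dB) on a power basis.
    total_power = sum(10 ** (level / 10.0) for level in levels_db)
    return 10.0 * math.log10(total_power)

# Illustrative floors: radio mics, mixer, Nagra, tape generations, mag film, mix electronics
chain = [-70.0, -68.0, -66.0, -64.0, -62.0, -60.0]
print(f"combined noise floor: {combined_noise_floor(chain):.1f} dB")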

The day after I finished Horowitz I began Krush Groove, the rap musical directed by Michael Schultz and photographed by Ernest Dickerson. It was a delicious feeling to traverse the extremes of western culture in twenty-four hours. Rap music is a kind of declamatory poetry set to music. At its best it envelops you in its incantatory magic; at its worst it gives you a headache. Luckily we had the cream of rap musicians performing for us in an intense five-week shooting schedule. Krush Groove is a feature fiction film but with a good dose of documentary reality. Many of the performers in the film are playing themselves, telling us about their own lives. The story is of a fledgling record label and how its founders fight to establish themselves and their artists in a none too perfect world. It is performed as a comic musical fantasy, but not without some dramatically poignant episodes.

Schultz and the other producers decided to record the location sound digitally and to do all the playback digitally. They contracted with Ruxton Video in Los Angeles to supply the machinery for digital recording and to perform the editorial services of transferring to video cassette and syncing up all of the dailies. Ruxton sent two PCM-F1 processors modified to accept an outside sync reference, a 60 Hz video generator and a small oscilloscope. The scope is necessary to verify video lock, which is to say sync. In New York we added a time code generator, and the carpenters on the crew built a traveling case for all of this gear. At the outset I was skeptical that this consumer- and industrial-grade equipment would submit to weeks of grueling shooting. My skepticism was misplaced. I was also concerned whether the sound crew, given the enormous amount of equipment we carried, would be able to keep up with the production pace. In addition to the digital equipment, we occupied as much space on the camera truck as did the camera crew and their equipment. Once again my concern was misplaced. The skill of my crew was such that we either kept pace with production or were ahead of it. Each of us discovered that the application of new technology meant redefining work roles. For my digital equipment operator, Michael Cohen, working on a feature was a new experience. He grasped the technical concepts quickly and became particularly adept at doing playback from the video cassette recorder. My boom operator, Jerome Vitucci, also acquired some new responsibilities in maintaining equipment and running the sound crew. On Krush Groove the Sony PCM-F1 and the VCRs became the primary recording and playback machines. The Nagra was now consigned to a backup role. If we lacked for anything on this job, it was places to put the multitude of signals we were recording or re-recording. Often we needed, but never had, an 8-track recorder.

Krush Groove was the last of my hi-tech adventures, and it raised the most pressing questions. On it we were using digital technology to replace the Nagra and 25 years of recording practice. The first question is primary: why do digital recording in the field? Unquestionably, it has the potential to sound better, though for dialogue work the improvement is marginal, especially when you consider the effects of the post production chain. For music and effects, however, the improved frequency response and the markedly increased dynamic range make digital audio a striking advance over analogue.
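
A rough rule of thumb behind that dynamic-range claim (a standard textbook figure, not a measurement from these productions): a linear PCM system of N bits offers a theoretical range of about 6.02N + 1.76 dB, so 14- and 16-bit digital recording sits comfortably above what analogue tape, even with noise reduction, can manage.

def pcm_dynamic_range_db(bits):
    # Theoretical dynamic range of linear PCM: about 6.02 dB per bit plus 1.76 dB.
    return 6.02 * bits + 1.76

for bits in (14, 16):
    print(f"{bits}-bit PCM: roughly {pcm_dynamic_range_db(bits):.0f} dB of dynamic range")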

On the set the Sony PCM-F1 is a true 2-channel recorder, offering complete channel separation. That means radio mic tracks will not bleed into the boom track, and a pre-recorded playback will remain completely independent of the live performer you are recording. There are drawbacks to the PCM-F1 as a field machine. Try to read its meters at high noon. Run the digital processor and an accompanying VCR on DC and you will discover what a voracious appetite for batteries they have. The powering sequence between a modified PCM processor and its outboard sync reference is critical: the sync reference must be switched on first and inserted into the processor before the latter is turned on. Otherwise you won't get sync lock.

Digital's real potential lies in post production, but only if the audio signal remains in the digital domain. Standard film post production requires about four generations of transfers before the optical print. This results in an inevitable loss of high-end frequency response and a rapid increase in noise. Each of these conditions affects dialogue intelligibility. Ironically, the appearance of Dolby Stereo sometimes highlights the problem. By offering an exponential improvement in the quality of music and effects but only an incremental advance in the quality of dialogue recording, it can heighten the contrast between location and post production sound.
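
To give that generational noise build-up a rough shape (again my own illustration, with an assumed per-generation figure rather than a measured one): if each analogue transfer contributes a roughly equal, uncorrelated layer of noise, the floor rises by about 10 log10(N) dB after N generations, while digital copies in principle add nothing.

import math

per_generation_floor_db = -66.0   # assumed noise contribution of a single analogue generation
for generations in (1, 2, 3, 4):
    floor = per_generation_floor_db + 10.0 * math.log10(generations)
    print(f"after {generations} generation(s): noise floor roughly {floor:.1f} dB")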

Generational loss is not a problem in digital audio. It can be reproduced, in principle and in careful practice, ad infinitum without any signal degradation. Digital audio also has an easier relationship to SMPTE Time Code and computer analysis than does sprocketed analogue audio. That makes it susceptible to minute, precise and flexible kinds of electronic editing. If anything, digital audio is apt to demand more skill from those of us who work in the field and to offer more creative choices to those of us who work in post production.
