Author Michael Lawrence is an independent FOH engineer and system tech who worked on the Hamilton College Theatre Department’s recent production of West Side Story. This article originally appeared on ProSoundWeb and is reprinted here with permission.

My first real foray into event production came at the age of 14, running the lights for my school’s production of Oliver! The Musical. I badgered the venue’s tech director with endless questions and learned a lot.

Soon after, I saw a more metaphorical light and switched to audio, but the TD’s willingness to teach me has not been lost on me. Without that generosity, I might not be working in production today, and so “paying it forward” is something I feel strongly about.

I was recently given the opportunity to do so while working as the A1 for Hamilton College’s production of West Side Story, staged in Wellin Hall, the largest theater on the school’s campus in Clinton, New York.

The production’s sound design had a lot of moving parts: 24 channels of wireless mics on actors, a full live orchestra in the pit, and audio cue playback from the projection-mapped video system, to name a few. That’s a lot of plates to keep spinning, but luckily I had the assistance of three awesome students who expressed an interest in helping out with the production and learning more about theater audio.

Getting Started

My first consideration was how best to deal with the gain-before-feedback realities of multiple omnidirectional condenser microphones on stage in a very reverberant space (a concert hall with an RT60 of about 2 seconds) and a full live orchestra. Although I know the space well, musical theater productions add a bunch of elements that I’m not as familiar with. In a quest for some perspective, I reached out to Dan Moses Schreier, the sound designer for the 50th anniversary Broadway production of West Side Story.
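To put that concern in rough numbers: a standard sound reinforcement rule of thumb says every doubling of open microphones costs about 3 dB of potential acoustic gain. Here's a minimal sketch of that relationship; only the 24-channel count comes from this rig, and the math is the generic textbook formula rather than anything measured in the room.

```python
import math

def nom_penalty_db(open_mics: int) -> float:
    """Reduction in potential acoustic gain from the number of open mics (NOM).

    Textbook rule of thumb: 10 * log10(NOM), i.e. roughly 3 dB per doubling
    of simultaneously open microphones.
    """
    return 10 * math.log10(open_mics)

# Worst case with every wireless channel open, versus a disciplined mix
# that only opens the mics a given scene actually needs.
for nom in (1, 2, 4, 8, 24):
    print(f"{nom:2d} open mics -> {nom_penalty_db(nom):4.1f} dB less gain before feedback")
```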

Dan graciously listened as I explained my plans for the design and gave me some tips. He suggested using photographs of the actors’ mic placements to ensure consistency each night. He also gave me some great advice about how I might approach miking the orchestra, and thanks to his wisdom I felt a bit more prepared going into production week.

Next up was the system coverage. Due to sightlines and the fact that the orchestra pit cover would be open for the show, my usual front fill positions were unusable. The college brought over three JBL 3 Series active studio monitors to fill the three main coverage gaps – the front outer edges and a “center gap.” Unconventional fill boxes, but they got the job done while being as small and unobtrusive as possible. To prevent an audience member from bumping a front fill into the orchestra, theater department TD Tom Marhenke built brackets to secure the front fills to the structural steel.

The show’s production design placed the traditional plot in a modern context, and the playback cues included several instances of approaching police vehicles as well as the infamous gunshot that kills Tony at the end of the show.

Production manager Jeff Larson and I had ongoing conversations about how to create a sonic environment that was an extension of the narrative, rather than simply playing sound effects at the audience through the PA. We hid a JBL PRX powered loudspeaker behind the upstage wall of the set. This created some very convincing police siren “whoop whoops” that filled the performance space with a diffuse sound, as if the squad car were just around the corner, sending the onstage gang members scurrying away.

The fateful gunshot that closes the show presented a dilemma both creative and technical: with such a well-known show (based on the even more well-known Romeo and Juliet), most of the audience is familiar with the play’s tragic ending. The moment still needed to be shocking and impactful, even though everyone knows what’s about to happen.

All the previous playback cues in the show were set to a relatively low level, reaching the audience only via indirect sound from the loudspeakers buried behind set walls. The gunshot, by contrast, came from a loudspeaker placed on a backstage rigging access landing, positioned to have direct line of sight to the majority of the audience seats. The effect was quite loud – on par with a real gunshot – and the direct loudspeaker aim created a sharp, harsh sound that the audience would localize towards the actor holding the gun. The result was quite startling, even to someone who is expecting it.

Another concern was tackling 24 frequencies of RF in a decidedly hostile environment, with multiple DTV stations transmitting just miles from the theater. Since the rig was entirely Shure ULX-D, we were able to connect all the receivers to a network switch and wireless router.

The units all popped up in Wireless Workbench 6, and a live scan allowed us to coordinate our frequencies around the local TV and other RF sources of nefarious origin. I made use of the program’s ability to “prioritize” channels, making sure the frequencies assigned to lead characters were the most stable. I then pushed our coordination results out to all the receivers and synced the packs via IR, making quick work of what would otherwise have been a tedious couple of hours.
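Wireless Workbench handles this automatically, but the basic idea behind intermod-aware coordination is easy to sketch: every candidate frequency needs to stay a safe distance from the other carriers and from the third-order intermodulation products (2 x f1 - f2) those carriers generate. Here's a toy version of that check; the frequencies and the 300 kHz guard band are placeholders, not the show's actual coordination.

```python
from itertools import permutations

def third_order_products(carriers_mhz):
    """Third-order intermod products (2*f1 - f2) for every ordered pair of carriers."""
    return {round(2 * f1 - f2, 3) for f1, f2 in permutations(carriers_mhz, 2)}

def is_clear(candidate_mhz, assigned_mhz, guard_khz=300):
    """True if the candidate sits at least guard_khz away from every assigned
    carrier and from every third-order product the assigned carriers create."""
    guard = guard_khz / 1000.0
    hazards = set(assigned_mhz) | third_order_products(assigned_mhz)
    return all(abs(candidate_mhz - hazard) >= guard for hazard in hazards)

# Placeholder carriers, not the show's actual frequency plan.
assigned = [494.300, 495.150, 496.875]
print(is_clear(497.725, assigned))  # True: clear of carriers and their IM products
print(is_clear(496.000, assigned))  # False: 2 * 495.150 - 494.300 lands right here
```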

The Process

I made a decision early in the process to stick to theater audio best practices as much as possible so my student A2s – Jane, Henry, and Sophia – could get some relevant experience should they choose to pursue audio careers. For me, best practices are all about achieving consistency and preventing problems before they occur. The four of us developed a daily routine that consisted of installing fresh batteries, checking the physical condition of each mic pack and element, and “wargaming” the RF system to detect any frequency interlopers.

About 10 minutes before actors showed up to get into their mics, the team on stage would switch on and line check each pack while I monitored the audio via headphones at front of house. This process allowed us to catch any mic elements that might have been damaged or broken during the show’s extensive fighting and tumbling choreography, and quickly swap in a new element.

Our last step before the actors arrived was to review photos of the mic placements from the previous night and discuss any placement changes I wanted to make going forward.
We wanted to get the actors into mics, checked, and offstage as quickly as possible so the venue could open doors, but I also didn’t want to rush through check only to discover an issue during the performance.

Jane and Henry would help the actors fit their mics, checking placements and making sure the tape was comfortable, while Sophia would tell me via comms which actors had been fitted and were making their way to the stage for check.

The actors would come out and run a few lines while I made any necessary tweaks to EQ, and then we sent them on their way. By the second performance, we had become a well-oiled machine, getting the entire process down to about 15 minutes.

Putting It Together

Originally, I planned to mix the show on a Behringer X32 so the A2s could get some experience with a board they’d be likely to encounter early on in their careers. I built a showfile using James Holt’s excellent X32 Theater Control software, which allows line-by-line VCA-style mixing automation as utilized by Broadway and West End theater productions.

However, as the complexity of the show grew, I migrated to my Midas Pro1 so I could accommodate the show’s 41 total inputs as well as take advantage of more comprehensive automation abilities, digital snake redundancy, dynamic EQ for an actor with an unruly 1 kHz rasp, and POP groups to quickly access needed inputs and outputs. The main console structure had the show’s inputs being mixed into a stereo pit bus and a stereo vocals bus, which were then distributed to the various PA zones via matrices.
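On paper, that bus-to-matrix structure is just a gain matrix: each PA zone gets its own blend of the pit and vocal buses. Here's a toy illustration of the idea; the zone names and send levels are invented for the example, not taken from the actual show file.

```python
import numpy as np

# Four bus signals (pit L/R, vocals L/R) feed each PA zone through a matrix
# of send gains. Zones and gain values are illustrative only.
buses = ["pit_L", "pit_R", "vox_L", "vox_R"]
zones = ["main_L", "main_R", "front_fills", "delays"]

# Rows = zones, columns = buses; linear gains (1.0 = unity).
sends = np.array([
    [1.0, 0.0, 1.0, 0.0],   # main left: left pit + left vocals
    [0.0, 1.0, 0.0, 1.0],   # main right: right pit + right vocals
    [0.3, 0.3, 0.8, 0.8],   # front fills: vocal-heavy mono sum
    [0.5, 0.5, 0.6, 0.6],   # delays: a blend of everything
])

bus_samples = np.array([0.20, 0.25, 0.50, 0.48])  # one audio sample per bus
zone_samples = sends @ bus_samples                # what each zone's amp sees
for name, value in zip(zones, zone_samples):
    print(f"{name:12s} {value:+.3f}")
```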

Although the pit orchestra was plenty loud enough and well balanced acoustically, I added two violin mics, a cello mic, and a stereo DI from the keyboards to allow me to bring some focus and clarity to the musical lines, as well as accentuate the natural swelling dynamics during certain passages.

Bringing the pit mics up in the mix spread the perceived image of the orchestra both horizontally (the mains are spaced wider than the pit) and vertically (the mains are flown), so rather than providing simple level control, the pit mics let me grow and shrink the audience’s perception of the orchestra to complement what was happening on stage. I also used a 1.8-second hall reverb on the string mics during key moments to provide a drippy, dreamy, romantic texture.

Problem Solving

Due to the brief tech schedule, my team and I only had two rehearsals of the show before opening night, and we had a problem: this cast had an unusual number of very strong singers, and I noted some serious distortion issues when actors were singing loudly or shouting (something that happens often in this show). An occasional clip never hurt anyone, but this was detracting from the performance in a big way.

After the dress rehearsal, I walked up to speak with director Mark Cryer about the problem. I hate having to do this, because my goal is never to have to bother a director with an audio issue. I aim to have everything taken care of, so the director can deal with other things.

Mark must have seen my frustration, and before I said anything, he said, “Hey, man. I know you’re working on it.” Mark was letting me know that he had confidence in me and was giving me the professional space to solve the problem – not a small gesture the night before opening.

The A2s and I went through the gain structure of the entire system, and after placing a call to Shure to confirm my understanding of the ULX-D gain structure, we arrived at the conclusion that the actors were overloading the mic elements themselves.

Cheating the placements slightly further back on the actors’ faces gave a slight improvement, at the cost of reduced gain before feedback and far more room bleed. In such a reverberant space, with a full orchestra just feet away, and with so many open mics during some moments, this was not a trade-off I was willing to make for some characters.
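There's simple geometry behind that trade-off. The direct sound at the capsule falls off roughly with the inverse square of the mic-to-mouth distance, while the orchestra bleed and room reverberation arriving at the capsule stay essentially constant, so every dB lost to a farther placement has to come back as channel gain and shows up as a worse signal-to-room ratio. A rough sketch with made-up distances:

```python
import math

def direct_level_change_db(d_old_cm: float, d_new_cm: float) -> float:
    """Change in direct level when the mic element moves from d_old to d_new,
    treating the mouth as a point source (inverse square law)."""
    return 20 * math.log10(d_old_cm / d_new_cm)

# Illustrative distances only: roughly a forehead/hairline placement
# versus one cheated back toward the ear.
drop = direct_level_change_db(5.0, 10.0)
print(f"Moving the element back costs about {abs(drop):.1f} dB of direct sound,")
print("gain that has to be made up at the console while the bleed stays put.")
```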

In these situations, my job is simply to do the best I can with what I have, but I wasn’t happy with the sound of the show, especially since I felt that the sound of the overloaded elements was really interfering with the emotional moments of the performance. The mic elements we were using were common models from two reputable manufacturers, but I needed a better tool for this particular job.

Last-Minute Change

I put in a “Hail Mary” call to the folks at Point Source Audio in California to explain the problem I was having, and 18 hours later, a box containing three of the company’s CO-8WL lavalier omni microphones arrived on my doorstep. I drove up to the theater early, miked myself up, went to my console at FOH, and started talking.

As soon as I heard myself through the PA, I actually said “Wow!” out loud. My voice sounded incredibly clear, even in front of the PA, and I was able to reduce the channel EQ to almost nothing. I stood onstage and screamed some lines from the show, with no audible distortion.

Normally, swapping three lead characters to a completely different mic without rehearsal would be inadvisable to say the least, but I felt confident enough in the improvements to give it a shot. I’d reserved a few extra minutes with the stage manager to do some tweaking during sound check, but I didn’t end up needing it.

During the performance, the CO-8s performed admirably, staying far more linear during the loudest yelling passages. The mics’ higher SPL rating allowed me to revert to the closer placement, achieving a cleaner sound with much less bleed during the show’s quietest moments. I was able to push the level of Maria’s closing monologue, allowing her emotional performance to really shine through.

Several members of the creative team visited the mix position to mention how much better it sounded, and the actor who played Tony came running up to shake my hand after the performance, saying “Wow! This microphone!” There’s no foldback in the show, so the fact that he perceived such an improvement just by hearing what was bouncing off the back wall of the theater is high praise indeed.

A friend of mine who’s an experienced professional musician attended the show and afterwards told me, “That’s the cleanest dialogue delivery I’ve ever heard.”

The Right Path

As engineers, we’re trained to listen critically and to pay attention to small details that many people might miss altogether, so it’s easy to worry disproportionately about minutiae. But when set designers and stage managers remark on the improvement, you know you’re really on the right path.

A huge thanks to Point Source Audio for their gracious assistance in making this show sound incredible. For me, this show was equal parts teaching experience, learning experience, and a great opportunity to exercise some artistic expression.

After the countless hours of planning, preparation, programming and rehearsing that my team and I spent doing our small part in bringing this complicated production to life, it was a beautiful moment when all the tech became invisible and the story came shining through.

 
