Have you ever worried about losing your job to a robot or a software program? Sometimes in our industry, the same products that are designed to help us be more creative, achieve greater technical results or show a client in advance what they are going to get for all the money they just dished out can be the bane of our existence. Software bugs, technical limitations or simply being unfamiliar with a particular product or feature can be a showstopper. This is exactly what we faced when we were asked to projection map imagery on the American Airlines Arena in Miami for Phish's run of shows over New Year's Eve.
The Project Goes Smoothly… At First
We chose to use Avolites Ai media servers to drive the eight 40K projectors that we provided for the multi-day event. Because of the many successes we have had with Ai since we adopted it a year earlier, and its ability to completely previsualize a project in a true 3D environment, it was a natural choice. We also knew from experience that it would have no problem playing back the massive 4K files we would need to cover such a large area. Lastly, we knew that with Ai's ability to adapt to onsite changes, there wouldn't be much we could not deal with.
We started our usual process of creating 3D models of the venue, placing the projectors in the environment and working out the photometrics. Because of a very limited throw distance, we were right on the edge of the very wide 0.8 lenses we would require to fill the space, but we were comfortable with the results and the math was working out. We started to make custom content based on Phish's iconic imagery, logos and branding, and presented some of the views from Ai to the client. All was well in the virtual world!
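To make the lens math concrete, here is a minimal sketch of the throw-ratio arithmetic in Python; the distances are hypothetical stand-ins for illustration, not the actual site measurements:

    # Throw ratio = throw distance / image width. A 0.8:1 lens is very wide:
    # it paints an image 1.25 times wider than its distance from the wall.
    def image_width(throw_distance_ft, throw_ratio=0.8):
        """Width of the projected image for a given throw distance."""
        return throw_distance_ft / throw_ratio

    # Hypothetical distances, for illustration only:
    print(image_width(80))  # 100.0 ft of image from an 80 ft throw
    print(image_width(55))  # 68.75 ft -- moving 25 ft closer shrinks the image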
On load-in day, we started to build the two projection towers that would each house four 40K projectors. The first one went up fine, with the photometrics spot on; distances and angles worked as planned. When we moved to the second tower, we were told that we would not be able to place it where it was originally approved. We reworked the photometrics and relayed to the venue where we would be able to place the tower and still stay within the limits of the lenses and angles. We agreed to move 10 feet forward and accept a small compromise in image size. The venue told us we needed to move more than 25 feet. This was a problem, as we were already on the edge of the lenses.
We did a little more math, updated the Ai previz file and explained to the client the compromises that would need to be made. We had no other options, so we moved the tower. What we quickly discovered was that we were only able to overlap our projectors by about 5 percent instead of the planned 20 percent, and we now had such extreme angles that we had elongated pixels on the convex, sweeping wall that is the architecture of the AA Arena.
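For a sense of scale, here is a rough sketch of what that overlap change means in raster pixels, assuming a 4K-class projector; the figures are illustrative, not the show's actual numbers:

    # An edge blend needs enough overlapping pixels to hide the seam in a
    # gradual ramp; dropping from 20 to 5 percent leaves very little room.
    RASTER_WIDTH = 3840  # horizontal pixels of a 4K-class projector

    def overlap_pixels(overlap_fraction, raster_width=RASTER_WIDTH):
        """Pixels available for the blend ramp between adjacent projectors."""
        return int(raster_width * overlap_fraction)

    print(overlap_pixels(0.20))  # 768 px -- the blend zone as planned
    print(overlap_pixels(0.05))  # 192 px -- what the tower move left us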
We knew that our largest issue was going to be the quality of the edge blend with the varied sizes and angles, and with such a high-profile event, and being the first ever allowed to map on the arena, it had to be perfect. The first four projectors came into place very quickly using the tools in Ai. The second, much closer tower was proving to be a very big challenge, and it got to the point where it looked like we were not going to be able to pull this off. We tried every trick in our book, on the server, on the projectors, in the content, and nothing could account for these extreme angles and the very minimal projection overlap. It was the night before the show, we only had three hours left before we were required to turn the projectors off for the city, and we were out of options.
Epiphany Time
At LDI just one month earlier, the Ai team introduced their new auto blend technology, which used a simple and inexpensive USB webcam to do blending and geometry correction. Because I had never used it and had only seen a very short demo, I was very skeptical, and time was extremely limited. I also didn't happen to have a spare USB camera in my pocket. I sent one of my team to the little strip mall close by to find any type of USB camera.
While the search was on for a USB camera, we added the new module into our project file and read through the short instructions on how to use it. After about 15 minutes of trial and error, we understood how the process was supposed to work, tried a few options and got comfortable with it very quickly. When a (very overpriced) USB camera arrived from a local tourist shop, we plugged it in, installed the simple driver and decided to give it a try. Following the guidelines, we placed the camera where it was able to see the entire surface, about 30 feet behind the projection tower and at a much different angle and location than the actual projectors. With a broomstick and some gaff tape, we made the tiny USB camera stable and started clicking around in the new feature of Ai. We had one failed attempt (due to user error), but five minutes later, magic started to happen, and I literally mean magic.
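Ai's auto blend is proprietary, so the sketch below shows only the generic idea behind camera-based alignment, not Ai's actual algorithm: project a known pattern, find it in the webcam frame, and solve for the mapping between projector pixels and the camera's view of the surface. The pattern size and point grid here are hypothetical.

    import cv2
    import numpy as np

    # Generic camera-assisted alignment, illustrated with OpenCV:
    # the projector displays a known chessboard pattern, the webcam
    # photographs the wall, and the detected corners tie the two together.
    cam = cv2.VideoCapture(0)  # the cheap USB webcam
    ok, frame = cam.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Look for the 7x5 inner corners of the projected chessboard.
    found, corners = cv2.findChessboardCorners(gray, (7, 5))

    if found:
        # The same corners in the projector's own pixel space, known by
        # construction (here spaced 100 px apart).
        proj_pts = np.array([[x * 100, y * 100] for y in range(5) for x in range(7)],
                            dtype=np.float32)
        # Homography from projector pixels to the camera's view of the wall;
        # warping content through its inverse pre-distorts the image so it
        # lands straight on the surface.
        H, _ = cv2.findHomography(proj_pts, corners.reshape(-1, 2))

A single homography only models a flat surface; a convex, sweeping wall like the arena's calls for a much denser scan, which is consistent with the whole series of test patterns we were about to watch Ai project.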
The projectors each shot out a test pattern in various sizes, shapes and colors, and the progress bar in Ai started going up. After three minutes of this and a simple tap of the "complete and upload" button, a perfectly blended, shaped and color-corrected image appeared, showing the Ai logo and test pattern. In just a few moments, it had taken into account the limited overlap, the unusual geometry and the diverse photometrics, and it simply worked. While we are advanced Ai users, this was still the first time we had ever used this feature and this part of the software. It was amazingly intuitive and straightforward, and it worked every time. We proceeded to run the same process on the other projector set, which was already acceptable by all of our standards; once it finished, the results were even better. On either projector set, the blend zones were completely invisible, even with full white or saturated content. We had a success that turned a potential nightmare into a fairy tale ending.
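The other half of the trick is the blend ramp itself. A common textbook approach, and again not necessarily what Ai does internally, is an S-curve across the overlap zone that is corrected for projector gamma, so the light from the two machines sums to a constant level on the wall. A minimal sketch:

    # Classic edge-blend ramp: a smooth S-curve across the overlap zone,
    # gamma-corrected so the two projectors' light adds to a uniform level.
    def blend_weight(t, p=2.0, gamma=2.2):
        """t runs 0..1 across the overlap; returns the drive level to output."""
        if t < 0.5:
            w = 0.5 * (2 * t) ** p
        else:
            w = 1.0 - 0.5 * (2 * (1 - t)) ** p
        return w ** (1.0 / gamma)

    # One projector fades out with blend_weight(1 - t) while its neighbor
    # fades in with blend_weight(t): in linear light the sum stays at ~1.0,
    # which is why a good blend survives even full-white content.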
Since then, we have done extensive testing and trials with the new version of the software and this feature set, with great results. Unusual geometry, complex blend areas, different grades of projectors and varied surfaces have proven to be no problem for Ai's Autoblend feature. We have even had great success when using different types of projectors, lenses and lamps in the same project. This feature, coupled with the many and constant updates to the Ai suite of software and hardware, makes Ai a complete and solid package for a very wide range of video and lighting projects, including projection and pixel mapping, automation control, interactive media, VJs and, of course, the live, touring and installation markets. We've been using Ai for all of them.
Scott Chmielewski is principal of DMD (Digital Media Designs) S7udios, a North Miami, FL-based firm he founded in 2005. For more information on DMD’s recent projects, go to www.dmds7udios.com.