Production Sound & Video

Fall 2018

…network-attached storage devices (such as SmallTree's TZ5 or Avid's ISIS and NEXIS platforms). The moment the recordist hits the Stop button, he or she can open the files on a computer and bring the newly created clips into a nonlinear editing application to assess their viability. This method eliminates the intermediate steps of memory cards, transfer stations, and shuttle drives in favor of writing directly to external storage, and thus removes both the time and the risk associated with manual offloading. It also offers instant peace of mind, to both the person handling the media and the production as a whole, that the day's work is, in fact, intact and ready for post-production.

And this is only the most basic of network-based workflows. By utilizing advanced encoder systems, such as the aforementioned mRes platform, multiple tiers of files can be distributed across multiple pieces of network-attached storage. This gives the recordist the ability to simultaneously create both high-quality and proxy-grade video files and to make multiple copies of each in real time as a scene is being shot. This eliminates the potential need for time-consuming transcodes after the fact and, more importantly, the instant redundancy removes the key period of danger in which only a single, fragile copy of the production's work exists. As a result, recordists can unmount network drives mere minutes after a production wraps and turn them over for delivery to post with one hundred percent certainty that there are multiple functioning copies of the day's work. There is no need to spend several hours after wrap each day offloading cards and making backups.

Or, to take things a step further, productions can take advantage of the inherent beauty that is the internet and skip the shuttle process altogether. It is possible to create files in a manner that sends them directly to a post-production edit bay. With low-bitrate files or a high-capacity upload pipeline, recordists can set up their workstations with transfer clients (such as Signiant Agent or FileCatalyst) that take files created in a particular folder on their network-attached storage and automatically upload them to a cloud-based server, where post-production teams can download them for use (a simple sketch of such a watch folder appears below). This process has the distinct advantage of sending editors new files throughout the day in order to accommodate a tight turnaround.
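The exact behavior of commercial transfer clients is proprietary, but the underlying watch-folder idea is straightforward. The following is a minimal Python sketch of that pattern, not any vendor's implementation; the folder path is hypothetical and the upload_to_cloud() function is a placeholder where a real transfer client or cloud API call would go. The one detail worth noting is the stability check, which waits for a clip to stop growing before handing it off.

```python
import time
from pathlib import Path

WATCH_DIR = Path("/Volumes/NAS01/DAY_012/PROXY")  # hypothetical watch folder on the NAS
POLL_SECONDS = 5                                  # how often to rescan the folder
uploaded = set()                                  # clips already handed off this session


def upload_to_cloud(path: Path) -> None:
    """Placeholder for a real transfer client or cloud upload call."""
    print(f"Uploading {path.name} ...")


def is_stable(path: Path, settle_seconds: int = 10) -> bool:
    """Treat a clip as closed once its size stops changing for settle_seconds."""
    first_size = path.stat().st_size
    time.sleep(settle_seconds)
    return path.stat().st_size == first_size


while True:
    for clip in sorted(WATCH_DIR.glob("*.mov")):
        if clip in uploaded:
            continue
        if is_stable(clip):
            upload_to_cloud(clip)
            uploaded.add(clip)
    time.sleep(POLL_SECONDS)
```

Commercial clients layer checksums, retries, and bandwidth management on top of this basic loop, which is why productions rely on them rather than rolling their own.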
Conversely, for productions where the post-production team is located on site, a hard line can be run from the recording network directly to the edit bays. By assigning the post team's ISIS server (or a comparable network-attached server) as a recording destination, editors gain access to files while they are being recorded. In cases such as this, the production may opt to use "growing" Avid DNxHD files. This format takes advantage of Avid's Advanced Authoring Format to routinely "close" and "reopen" files, allowing editors to work with them while they are still being recorded. For productions with incredibly tight turnarounds, this is the single fastest production-to-post-production workflow possible.

All of this makes server-based recording an incredibly versatile tool. However, it is not without its limitations. At this time, network-based encoders are limited to widely available intermediate and delivery codecs, such as Apple ProRes or Avid DNxHD. Without direct support from the companies that own proprietary formats, they cannot output formats such as REDCODE or ARRIRAW. Furthermore, setting up a network of this nature requires persistent power and space. It is also worth considering that, like most new technologies, server-based recording often comes with a hefty price tag. These limitations make the process unsuited to productions hoping to take advantage of the full range of Red and Arri cameras, productions in remote or isolated locations, and low-budget productions.

So when is it most appropriate, or necessary, to take advantage of this emerging technology? While it can be of use in a single-camera environment, this method of recording truly shines in live, or archaically termed "live to tape," multi-cam environments, where anywhere from three to several dozen cameras are in use. After all, if a show records twelve cameras for one hour, the media manager suddenly has to juggle twelve hours' worth of content (a rough estimate of that data load appears at the end of this article). It is much easier to write all twelve streams to a network-attached storage unit than to offload all twelve cards one by one. And because network-attached storage can be configured to hold hundreds of terabytes, the process is ideally suited to live events or sports broadcasts, where stopping and starting the records risks missing key one-time-only moments. But above all, it is best used when time is critical. The ability to bring files into a nonlinear editing system as they are being recorded and work in real time is a game changer for media managers, producers, and editors alike.

This technology is already revolutionizing the way television productions approach on-set media capture, and it is still in its infancy. It will continue to grow and evolve. Given time, it is my sincere hope that it will find its way into the feature film market and become more practical for smaller productions to adopt. For the time being, Local 695 Video Engineers should begin to take note of what is available and familiarize themselves with the technology so that they are prepared to take advantage of it in the future.
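To put the twelve-camera example above in concrete terms, here is a rough back-of-the-envelope storage estimate. The bitrates and copy counts are illustrative assumptions (loosely in line with a 1080p ProRes 422 HQ master tier and a DNxHD 36-class proxy tier), not figures from the article or from any vendor.

```python
# Rough storage estimate for a multi-cam, server-based recording day.
# All bitrates below are illustrative assumptions, not vendor specifications.
CAMERAS = 12
HOURS_RECORDED = 1.0
HIGH_QUALITY_MBPS = 220      # e.g., roughly a 1080p ProRes 422 HQ-class master
PROXY_MBPS = 36              # e.g., roughly a DNxHD 36-class proxy
COPIES_OF_EACH_TIER = 2      # real-time redundancy across two NAS targets


def gigabytes(mbps: float, hours: float) -> float:
    """Convert a bitrate in megabits per second to gigabytes over a duration."""
    return mbps * hours * 3600 / 8 / 1000  # Mb/s -> GB (decimal)


per_camera_gb = gigabytes(HIGH_QUALITY_MBPS + PROXY_MBPS, HOURS_RECORDED)
total_gb = per_camera_gb * CAMERAS * COPIES_OF_EACH_TIER

print(f"Per camera, per hour (both tiers): {per_camera_gb:.0f} GB")
print(f"All {CAMERAS} cameras, {COPIES_OF_EACH_TIER} copies each: {total_gb / 1000:.2f} TB")
```

Even with these conservative assumptions, a single recorded hour lands in the multi-terabyte range, which is why offloading twelve cards one at a time is impractical and why network-attached storage configured for hundreds of terabytes is the natural fit.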
