Post Magazine

June 2013

Post Positions: Closed Caption Workflows
How to automate the process.

By Bruce Devlin, Chief Technology Officer, AmberFin (www.amberfin.com), Basingstoke, UK

Let's be perfectly honest about this... for post houses, captioning is a real pain. It is something you have to do because, by law, your clients, who own the copyright to the material, have to do it: the FCC mandates it. Under the FCC's 21st Century Communications and Video Accessibility Act rules, which came into effect at the end of 2012, all video content that is broadcast on television in the United States with captions now also requires captions when it is distributed over Internet Protocol (IP). This includes video content distributed through mobile smartphone apps, services like Hulu or Netflix, websites, YouTube, Internet-enabled televisions, DVD players, gaming consoles and so on.

It was already a challenge when you simply had to finish a television show and make sure it had a set of captions. Now you also have to provide accessibility wherever the content is going to end up: online, on mobile devices, and on different television standards and frame rates. And because the CEA-608 standard used in North America, while similar, is not the same as what is used elsewhere, you have to think about export versions too.

In an ideal world, you shouldn't have to worry about captions at any point in the workflow. All you need to know is that you have captions at the point of ingest, or, if you have commissioned captions, that the file has arrived and been added to the master asset. At the point of delivery you need to know that the correct version of the caption file is associated with each output, whatever happens along the way.

Sounds simple enough, right? Well, consider this: we currently see 15 different input caption formats and 23 output (delivery) formats. Multiply those by the number of frame rates and resolutions you are delivering, and you can easily reach many hundreds of different combinations and paths through the system.

One way to manage this level of complexity is to handle captions the way you handle video and audio: ingest and transcode to a house standard, the mezzanine; track the captions alongside all the other elements needed to finish and package; and transcode to the output standard. While the theory sounds straightforward, the implementation remains complex. I happen to believe that a complex workflow problem should not cause pain for anyone except the people building the tools to solve the challenge. Thankfully, the entire procedure can be automated with intelligent processing, using a workflow platform such as the AmberFin iCR.

At the point of ingest, the system needs to search the incoming stream (the VBI on a tape, the ancillary data in a file) for caption information. If it is there, preserve it within the mezzanine video file, if that wrapper supports captions, or in a sidecar file. If there are no captions, create a placeholder sidecar file, note in the metadata that captions are awaited, and record that in the ingest report so the captions can be chased from the supplier.
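To make that ingest decision concrete, here is a minimal sketch in Python. The Asset structure, the sidecar file naming and the metadata values are assumptions made for this example; they are not the API of the AmberFin iCR or any other platform.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Asset:
    media_path: str                      # path to the mezzanine media file
    sidecar_path: Optional[str] = None   # caption sidecar, if any
    metadata: dict = field(default_factory=dict)

def ingest_captions(asset: Asset, embedded_captions: Optional[str]) -> Asset:
    """Preserve captions found at ingest, or flag the asset so they can be chased."""
    if embedded_captions:
        # Captions were found in the incoming stream (VBI or ancillary data):
        # keep them alongside the mezzanine as a sidecar file.
        asset.sidecar_path = asset.media_path + ".captions.scc"
        with open(asset.sidecar_path, "w") as f:
            f.write(embedded_captions)
        asset.metadata["captions"] = "present"
    else:
        # No captions: create a placeholder sidecar, note in the metadata that
        # captions are awaited, and let the ingest report chase the supplier.
        asset.sidecar_path = asset.media_path + ".captions.placeholder"
        open(asset.sidecar_path, "w").close()
        asset.metadata["captions"] = "awaited"
    return asset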
Looking on the bright side, the business of authoring the captions is much easier in the file world. No longer do you need to dub a VHS tape and FedEx it to the caption house; you simply email a link to the browse proxy. Creating the captions during finishing, not afterwards, is a real time saver in production.

Content only goes to a post house for one reason: to be changed. If that content has captions, what happens to them when the video is edited? The in and out timecodes for each caption will change, obviously, and that needs to be processed.

RULES AND REGULATIONS

More seriously, an edit may result in captions not being on screen long enough to be easily read. The workflow needs rules on when in and out points can be shifted to ensure readability, and when the re-edited video needs to go back to the caption author for changes. AmberFin works in partnership with specialist developer Softel (now part of Miranda) to incorporate intelligent processing of caption streams.

What happens when you are creating a compliance edit, a cut that includes some censorship changes? That might be for an overseas broadcaster, or it could be for an airline version. The right captions have to be removed, and if the dialogue is bleeped or redubbed, you want to make sure the bad language does not appear in the captions. Again, as much as possible, we want to automate these operations.

If you are preparing deliverables for international distribution as well as the home market, then you may well have to handle different caption files for each country (even English is different in the UK and Australia!), and the same level of intelligent processing will need to be applied to each file. You can appreciate the need for automation.

At the point of delivery, the right caption file is selected and inserted, either into the video stream or as ancillary timed data in the file wrapper. Done right, the client is happy, no content is rejected and the FCC cannot threaten fines.

I have talked about this challenge at a very high level, and that makes it sound simple. For a service you have to provide but which will never make anyone, post house or production company, any money, the idea that it can be delivered at a manageable cost and with little or no manual intervention sounds good.

In truth, captions need careful tracking through the workflows to ensure the files remain accurate, consistent, and ready to go when the content is published. The key, then, is to use a platform with the intelligence to do all that management and processing. Let technology do the heavy lifting.
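To give a flavour of the processing such a platform has to automate, here is a minimal sketch in Python of one re-timing rule: shifting each caption's in and out points when a section is cut from the programme, and flagging anything that no longer stays on screen long enough to read. The Caption structure, the one-second minimum and the flag-for-review behaviour are assumptions made for this example, not the rules of the AmberFin iCR, Softel or any other product.

from dataclasses import dataclass
from typing import List

MIN_DURATION = 1.0  # assumed minimum on-screen time, in seconds

@dataclass
class Caption:
    text: str
    start: float          # seconds on the programme timeline
    end: float
    needs_review: bool = False

def retime_for_edit(captions: List[Caption], cut_in: float, cut_out: float) -> List[Caption]:
    """Re-time captions after the section [cut_in, cut_out) is removed from the timeline."""
    removed = cut_out - cut_in
    result = []
    for c in captions:
        if c.end <= cut_in:
            # Entirely before the cut: unchanged.
            new_start, new_end = c.start, c.end
        elif c.start >= cut_out:
            # Entirely after the cut: shift earlier by the removed duration.
            new_start, new_end = c.start - removed, c.end - removed
        elif c.start >= cut_in and c.end <= cut_out:
            # Entirely inside the cut: the caption disappears with the edit.
            continue
        else:
            # Straddles the cut: keep only the part that survives.
            new_start = c.start if c.start < cut_in else cut_in
            new_end = cut_in if c.end <= cut_out else c.end - removed
        # Anything left on screen too briefly goes back to the caption author.
        too_short = (new_end - new_start) < MIN_DURATION
        result.append(Caption(c.text, new_start, new_end, needs_review=too_short))
    return result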
