Compressor - Media Compression and Conversion (archived forum, read-only)
Any reason to use ProRes over H.264? Posted by Robert Scheer
Hi all, first post.
I work on a Mac, and use Canon DSLRs (Mark IV and 5D MkII). I use FCP 7.0.2, which works with the H.264 right out of the camera, as you know. Would there be any reason to keep converting the files to ProRes 422 LT (I use MPEG Streamclip), as I had to do with FCP 6? Thanks all! -B
H.264 is a heavily compressed format, and takes significantly more processor power to decode and re-encode. Also, you should use the Log and Transfer (L&T) tool with the Canon EOS plugin. That allows you to tap into the TOD (time-of-day) data from the camera.
www.strypesinpost.com
You can use MPEG Streamclip; it just doesn't read the time and date metadata from the camera, and it will also flag the track as either lower or upper field first, because "none" is not an option in MPEG Streamclip. So it's really up to you, but personally I'd use L&T.
www.strypesinpost.com
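For what it's worth, if you're comfortable scripting, the same LT transcode can be done with ffmpeg. That's an assumption on my part (the thread only mentions MPEG Streamclip and L&T), and the filenames are placeholders; the profile numbers are ffmpeg's `prores_ks` encoder settings, not Apple's names:

```python
# Sketch: building an ffmpeg command for an H.264 -> ProRes 422 LT transcode.
# Assumes ffmpeg with the prores_ks encoder is installed; run the returned
# list with subprocess.run() to actually convert.

def prores_lt_cmd(src, dst):
    """Return the ffmpeg argument list for an H.264 -> ProRes 422 LT transcode."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",   # ffmpeg's ProRes encoder
        "-profile:v", "1",     # 0=Proxy, 1=LT, 2=422, 3=HQ
        "-c:a", "pcm_s16le",   # uncompressed 16-bit audio, usual in ProRes .mov
        dst,
    ]

print(" ".join(prores_lt_cmd("MVI_0001.MOV", "MVI_0001_LT.mov")))
```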
No. ProRes HQ is a 10-bit, 220-megabit format. What comes off your camera is 8-bit, 50-megabit. It's the worst kind of overkill.
And 4444? That's 12-bit. Complete waste of time. ProRes 422 LT will hold your frames with no visible difference, and still give you transparency across multiple recompressions. If you're really super-paranoid, go with ProRes 422 instead, which gives you an extra 45 megabits to play with.
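To put those data rates in storage terms, here's the rough arithmetic. The Mbit/s figures are the approximate targets for 1080p at 29.97 fps quoted in this thread plus commonly cited ProRes numbers; actual file sizes vary with content:

```python
# Rough storage cost per hour of footage at the data rates discussed above.
RATES_MBPS = {
    "H.264 (camera)": 50,
    "ProRes 422 LT": 102,
    "ProRes 422": 147,    # roughly the "extra 45 megabits" over LT
    "ProRes 422 HQ": 220,
}

def gb_per_hour(mbps):
    # megabits/second -> gigabytes/hour (decimal GB)
    return mbps * 3600 / 8 / 1000

for name, rate in RATES_MBPS.items():
    print(f"{name}: {gb_per_hour(rate):.1f} GB/hour")
```

So an hour of HQ is roughly four times the footprint of the camera originals, which is the "overkill" point in numbers.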
I dunno; personally I'll skip LT unless I'm short on drive space. I'll use ProRes, and go the HQ route if I'm sending it out for heavy grading.
You should be able to do a batch recapture if you captured it with L&T. But I haven't tested that on the DSLRs, and I wouldn't advise going offline/online, except as a backup route.
www.strypesinpost.com
Then just use LT. You'll never have any more latitude than what comes off the camera body, so any additional "latitude" you get from using a higher-data-rate format will be wasted. If you push any of your shots far enough to see the difference between LT and 422, then your footage will already have fallen apart.
As for RAM: the footage itself doesn't eat RAM. ProRes does take more processing power to encode and decode, but just switch your sequence render settings to 8-bit, and only go to high precision when you are rendering for final output. The default is to render ProRes in floating point all the time, and that is a HUGE amount of calculation.
www.strypesinpost.com
>You'll never have any more latitude than what comes off the camera body
That's true, but the difference when working with AVC footage, as opposed to other formats such as HDV, XDCAM EX, or DVCPRO HD, is that you aren't editing natively; you're editing off an intermediate format.
www.strypesinpost.com
True, but when it comes to intermediate formats, you aren't re-wrapping the data (e.g. DVCPRO HD from P2 files, or XDCAM EX from the MP4 files) but transcoding it. And your choice of codec depends on how much quality you want to retain from the source material, and on ease of workflow (some software and plugins don't read ProRes). We transcode HDV and XDCAM because it makes the footage easier for us to work with, not because it makes the footage look better. Even if I capture HDV and XDCAM EX natively, they will always be rendered out to HQ for the final master, or at the very least to ProRes. But I'm obsessive-compulsive about my choice of codecs.
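The rewrap-vs-transcode distinction above can be sketched as two ffmpeg invocations (again assuming ffmpeg, which the thread doesn't mention; filenames are placeholders):

```python
# Rewrap vs transcode, as ffmpeg argument lists.

def rewrap_cmd(src, dst):
    # Rewrap: copy the existing streams into a new container, no re-encode
    # (e.g. moving XDCAM EX essence from .mp4 into .mov).
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

def transcode_cmd(src, dst):
    # Transcode: decode and re-encode to an intermediate codec
    # (here ProRes 422 HQ, prores_ks profile 3).
    return ["ffmpeg", "-i", src, "-c:v", "prores_ks", "-profile:v", "3", dst]

print(" ".join(rewrap_cmd("CLIP.mp4", "CLIP.mov")))
print(" ".join(transcode_cmd("CLIP.mp4", "CLIP_HQ.mov")))
```

The rewrap is lossless and near-instant because no pixels are touched; the transcode is where quality and codec choice actually come into play.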
ProRes 4444 is complete overkill, because you aren't using those extra chroma samples; they weren't even captured in the first place. But HQ, SQ, and LT are all 10-bit codecs; it's just that LT compresses the data most aggressively.
www.strypesinpost.com