WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???

Posted by kjoske 
WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 04:46PM
I have been waiting to purchase a new iMac with Snow Leopard preinstalled, and I am also going to purchase the latest FCP.

My question is: will I be able to compress on the new FCP and my new iMac any faster than I am compressing on my old iMac with FCP 5? I am using the regular Compressor that comes with FCP 5; I have not used the QuickTime export option very much, because it seems to me that the regular Compressor outputs better video quality.

Right now it is taking me 18 hours or more to compress a simple 4-minute video with a soundtrack. By simple I mean it is straight video without a lot of motion or animation added to it.
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 05:32PM
Not so much compared to a new machine and FCP7.

Noah

Final Cut Studio Training, featuring the HVX200, EX1, EX3, DVX100, DVDSP and Color at [www.callboxlive.com]!
Author, RED: The Ultimate Guide to Using the Revolutionary Camera available now at: [www.amazon.com].
Editors Store- Gifts and Gear for Editors: [www.editorsstore.com]
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 05:47PM
Didn't you post this already a few days ago? Please don't repeat-post.


www.derekmok.com
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 09:57PM
Noah,

My Mac is not an Intel machine, and I understand I have to have an Intel computer (which the new Macs are) to run the new FCP7 software. Is that right?

So you are saying that it is not a new computer but the new FCP7 software that will make rendering faster? Is there any way to guess how much faster the rendering will be? E.g., could a 4-minute video render in 7 hours instead of the 18 hours or so it is now taking me?
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 10:02PM
If your Mac isn't an Intel machine, you can't run Snow Leopard either.

>Is there anyway to make a guess as to "how much faster" the rendering will be? i.e. a 4 minute
>video can render in 7 hours instead of the 18 hours or so it is now taking me.

I've seen a 10-hour standards conversion whittled down to around 2 hours on an 8-core. But that was with QuickClusters in FCS2. I don't think FCP7 itself is multithreaded, so I would not expect much of a render boost from within FCP7. However, you do get faster processors, which would decrease render times by perhaps 10-20% depending on the machine you are moving to.



www.strypesinpost.com
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 10:35PM
Nope. I said:

"a new machine and FCP7"

Noah

Final Cut Studio Training, featuring the HVX200, EX1, EX3, DVX100, DVDSP and Color at [www.callboxlive.com]!
Author, RED: The Ultimate Guide to Using the Revolutionary Camera available now at: [www.amazon.com].
Editors Store- Gifts and Gear for Editors: [www.editorsstore.com]
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 16, 2009 11:19PM
This has been discussed...if you don't have 64-bit apps, it won't be faster...

...and PLEASE DON'T YELL BY TYPING ALL CAPS!!!

When life gives you dilemmas...make dilemmanade.

Re: Will Snow Leopard Make Compressing Video Faster???
September 17, 2009 04:48PM
Thank you all. Okay, you can tell I am learning: I don't understand 64-bit apps. I think the new Snow Leopard comes with that.

With FCP7 and a "new machine" that has the following specifications, will I be able to render and produce video faster?

iMac
3.06GHz Intel Core 2 Duo
4GB 1066MHz DDR3 SDRAM - 2x2GB
1.0TB Serial ATA Drive
ATI Radeon HD 4850 512MB

The computer comes with:
3.06GHz Intel Core 2 Duo
4GB memory
1TB hard drive
8x double-layer SuperDrive
NVIDIA GeForce GT 130 with 512MB memory

and all you can really add to it is the ATI Radeon HD 4850 512MB in place of the NVIDIA, so I thought that may be what you are talking about that could make it run faster.

I can barely afford a new computer, and I surely don't want to buy a new computer and/or FCP7 if it is not going to decrease my rendering time.
Re: Will Snow Leopard Make Compressing Video Faster???
September 17, 2009 04:55PM
Faster computer is faster.

How much faster? Who the hell knows? If it's taking you the better part of a day to encode four minutes of footage right now, you're clearly doing something very wrong, so it's impossible to predict how much benefit you're going to see from a new computer.

Here's a very important thing to remember: iMacs are not fast computers. They're not optimized for performance. They're designed with priorities other than raw computing power in mind. So no matter what you do, you're never going to get great performance out of an iMac, just like you'll never get great performance out of a MacBook Pro. Those of us who use laptops do so because we dig the things-other-than-raw-performance a laptop provides. We're aware of the compromises, and we embrace them.

If you're looking for balls-out computational power, buying an iMac is embracing the wrong set of compromises. It's not a high-performance processing engine, yet it's desk-bound all the same.

If performance is your top priority, find a way to get your hands on a Mac Pro. The last-generation models, the ones Apple calls "Early 2008", are great machines, and can be had for a reasonable price used or refurbished.

Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 17, 2009 07:12PM
...but the early 2009 Nehalems are FASTER...just ask Barefeats.

When life gives you dilemmas...make dilemmanade.

Re: Will Snow Leopard Make Compressing Video Faster???
September 17, 2009 07:22PM
Thanks, you're right. I realize that something is really wrong when it takes 18 hours to compress 4 minutes of video, but I do not know what. I've been using my newer iMac to compress video, but it's still a 3-year-old iMac.

I also have a PowerMac tower, but it's an old computer.
Machine Name: Power Mac G5
Machine Model: PowerMac7,2
CPU Type: PowerPC 970 (2.2)
Number Of CPUs: 1
CPU Speed: 1.8 GHz
L2 Cache (per CPU): 512 KB
Memory: 512 MB
Bus Speed: 900 MHz
Boot ROM Version: 5.0.1f1

but it won't take Snow Leopard or FCP7.
Re: Will Snow Leopard Make Compressing Video Faster???
September 17, 2009 09:18PM
What codec are you encoding to?

One thing to note as well is that in Compressor, the Apple TV HD preset encodes video much faster than the H.264 QuickTime preset. One of the main differences in speed is that one setting uses multi-pass encoding and the other does not. On a virtual cluster it takes 4:27 to encode a 5:13 2K clip (24p) with the Apple TV HD preset. Going to a QuickTime .mov file (2K source to 2K H.264) takes 4:51. I was getting similar times on Leopard to what I am getting on Snow Leopard on this particular early 2008 Mac Pro.

To really get a lot of benefit from Snow Leopard, I think we need to wait for Apple's Final Cut team to take advantage of OpenCL in Compressor and other parts of Final Cut. Offloading some compression to the GPU could bring a lot of speed gains.
Re: Will Snow Leopard Make Compressing Video Faster???
September 17, 2009 10:09PM
Quote

To really get a lot of benefits from snow leopard I think we need to wait for apple's final cut team to take advantage of OpenCL in compressor and other parts of final cut.

I wouldn't get my hopes up too much about that, actually.

TL;DR version: Compressor, when correctly set up, basically already runs about as fast as it can on your computer. Enabling technologies like GCD and OpenCL are cool, but they can't squeeze blood from a stone.

Here's the thing about both OpenCL and the infinitely cooler Grand Central Dispatch on which OpenCL is built: Neither one of them lets your computer do anything it couldn't do before. They're both just enabling technologies that make it easier for developers to take advantage of the parallel-processing architecture already present in every modern Mac.

What GCD does is abstract away the mechanics of thread pooling, letting developers ignore everything except which inner loops can be run in parallel and which need exclusive access to instance variables and such. Along the way, it also takes the question of just how many tasks to execute in parallel away from the application developer (or, God forbid, the user) and lets the operating system handle that. The OS knows how many compute units are present in your system, whether it's two (in this laptop I'm using) or sixteen (in a 2009-vintage Mac Pro). Furthermore, the OS knows how much stuff is going on at the moment (how many tasks are being performed by the system, how much memory is available), so it can throttle back dynamically in order to keep things running smoothly.

But again, that's nothing your computer couldn't do before. In fact, Compressor already does all of it. Well, almost all of it. You've surely noticed how, in the Qmaster preferences pane, you have to tell Compressor how many tasks to run in parallel. In theory, GCD would render that setting unnecessary. In practice, you'd still want that setting for cluster-tuning reasons, but let's ignore that for right now. Point being, Compressor is already a finely tuned parallel application, so GCD doesn't really offer very much to it. Maybe a completely rewritten GCD-savvy Compressor would see a 5 or 10 percent performance gain if smarter context-switch management at the OS level leads to fewer L1 cache misses, but it wouldn't be a huge deal on the same hardware. The improvement, if any, would be within the margin of error.
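As a loose analogy (this is Python's concurrent.futures, not GCD itself, and the segment task is an invented stand-in), the idea of letting the runtime size the worker pool from the machine rather than asking the developer or user looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_segment(segment_id):
    """Stand-in for a parallelizable inner loop, e.g. one chunk of a transcode."""
    return segment_id * segment_id  # dummy work

# As with GCD, the pool size is not hard-coded by the developer:
# max_workers=None lets the executor choose a default derived from os.cpu_count().
with ThreadPoolExecutor(max_workers=None) as pool:
    results = list(pool.map(encode_segment, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The developer only declares which loop is parallelizable; how many tasks actually run at once is the runtime's call, which is exactly the setting Qmaster still makes you pick by hand.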

As for OpenCL, well, that's another kind of abstraction. Previously, if you wanted to use a graphics processing subsystem as a sort of general-purpose coprocessor, it was no big technical problem, just annoying and tedious. All you had to do was talk to the graphics card in the right way (using OpenGL, usually) and tell it what to do. Problem was, doing this efficiently meant having intimate knowledge of what graphics hardware was present and what capabilities it had. Could you write one program and know it would perform optimally on a MacBook Pro, a Mac Pro and a graphics-less Xserve? No, definitely not. On one of those it would be fine, on another it would be too slow, and on the third it would just crash.

OpenCL abstracts away the capabilities of the graphics hardware, and, in fact, the existence of the graphics hardware. When you include an OpenCL kernel in your application (essentially an inner loop that you want executed somewhere on the system, as fast as possible, please), the operating system is responsible for figuring out where that kernel can run, and where it should run given the relative capacity of GPU and CPU execution units available at the time.

Which is really neat. It means that if the developer goes to the trouble of abstracting away one of his inner loops into an OpenCL kernel, and if that kernel can be run more efficiently on the graphics hardware than on CPU hardware, then that kernel will run on the GPU.

That's two big ifs, though, and the first one is naturally dependent on the second. Say you're a developer working on an application, and you want to know whether it's worth the time and effort to write (or rewrite) it to include OpenCL support. This isn't a hard problem to solve; you just have to start thinking of the GPU as a general-purpose coprocessor, like the Tensor Processing Unit they used to put in big SGI systems at the beginning of this decade. Once the coprocessor has both the kernel you want to run and the data you want it to run on, it can run it really fast without tying up any system resources like CPU or cache or main memory bandwidth. But first you have to give it the kernel you want to run, and then you have to give it the data you want it to run on. Which is why coprocessors are typically reserved for heavy, heavy math, like signal processing functions. You give them a modest amount of data, and they crunch on it like crazy, then they give you back the results.

Problem is, video encoding does not deal with a modest amount of data. This morning I had to make a quick DVD for a client, with about three minutes of footage on it. My master file, which I had to encode to MPEG-2 for the DVD, was over three gigabytes. And that's just for standard definition! Compared to the amount of data typically foisted off to a coprocessor, video is enormous, which means the overhead of having to shuttle that data down the PCI bus before calculation on it can begin is significant.

So for video encoding, OpenCL is unfortunately not an obvious big-win proposition. It's highly likely that a clever programmer might be able to turn out an OpenCL kernel for, say, H.264 encoding that shows significant performance increases on the GPU over a single CPU processing unit. But it'd be a lot of work, and that sort of ignores the fact that, well, sixteen CPU processing units is much faster than one.
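A quick sanity check on those numbers (assuming "over three gigabytes" means roughly 3 GiB and the footage is exactly three minutes; both are guesses) shows the master's data rate is in uncompressed-SD territory, and that is the scale of data that would have to be shuttled down the bus:

```python
# Back-of-envelope check: 3 GiB of master file for ~3 minutes of SD footage.
size_bits = 3 * 1024**3 * 8   # assumed size: 3 GiB, in bits
duration_s = 3 * 60           # assumed duration: three minutes

rate_mbps = size_bits / duration_s / 1e6
print(f"{rate_mbps:.0f} Mbit/s")  # 143 Mbit/s, the ballpark of 8-bit uncompressed SD
```

Compare that with the few kilobytes of coefficients a classic DSP-style coprocessor job involves, and the "video is enormous" point above becomes concrete.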

Grand Central Dispatch is an awesome enabling technology that's of basically no benefit to a mature, highly tuned application like Compressor. And while OpenCL is neat, it's not a game-changer by any stretch of the imagination.

Not to mention the fact that it would be an astonishing amount of work to refactor Compressor into OpenCL kernels, only to get very modest performance improvements in only very specific situations. It just doesn't make a whole lot of business sense, really.

Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 18, 2009 01:01AM
Isn't there a Compressor subforum??
Re: Will Snow Leopard Make Compressing Video Faster???
September 20, 2009 12:39PM
Jeff Harrell, Larkis, NoahK, strypes

I am exporting to Compressor

then I am using the setting
DVD: Best Quality 90 minutes 16:9

then I go to All and
use MPEG-2 6.4 Mbps 2-pass 16:9
Dolby 2.0

Since my original video is 1080i 16:9, I would like to use the HD setting to maintain the quality of my original video, but the output is only one file (a movie), and I am transferring my outputs to DVD Studio Pro 4, which needs separate movie and audio (Dolby 2.0) files. I tried compressing to H.264, which is what everyone says is the best format, but my DVD Studio Pro 4 will not import it. (Quoting Larkis: when using Compressor, the Apple TV HD preset encodes video much faster than the H.264 QuickTime preset.)
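For what it's worth, the bit-budget arithmetic behind that "90 minutes" preset checks out. Here is a sketch, assuming a 192 kbps Dolby Digital 2.0 stream (the audio rate is not stated in the thread) and a 4.7 GB single-layer disc:

```python
# Why a "90 minute" DVD preset lands near 6.4 Mbps video.
video_mbps = 6.4       # from the Compressor setting above
audio_mbps = 0.192     # assumed Dolby Digital 2.0 rate
minutes = 90

program_bytes = (video_mbps + audio_mbps) * 1e6 / 8 * minutes * 60
dvd5_bytes = 4.7e9     # single-layer DVD capacity, decimal gigabytes

print(f"program: {program_bytes / 1e9:.2f} GB of {dvd5_bytes / 1e9:.1f} GB")
```

A 4-minute program at the same rate uses only a small fraction of the disc, which is why a short video can afford a higher-bitrate preset than a 90-minute one.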

I am using Final Cut Studio, which has FCP 5, Motion 2 and DVD Studio Pro 4.

Hardware Overview:

Machine Name: iMac G5
Machine Model: PowerMac8,2
CPU Type: PowerPC G5 (3.0)
Number Of CPUs: 1
CPU Speed: 2 GHz
L2 Cache (per CPU): 512 KB
Memory: 512 MB
Bus Speed: 667 MHz
Boot ROM Version: 5.2.5f1
Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 20, 2009 02:58PM
Thanks Jeff for your very clear explanation about Grand Central Dispatch and OpenCL!

Piero
Re: Will Snow Leopard Make Compressing Video Faster???
September 20, 2009 07:15PM
I love it when you talk geeky like that Jeff ... do you have a sister?
Re: Will Snow Leopard Make Compressing Video Faster???
September 21, 2009 03:01PM
I was under the impression that OpenCL is similar to NVIDIA's CUDA, which is what Adobe is using to accelerate some of their encoding with the Elemental plugins. The speeds they seem to be getting out of it are a lot faster than what Compressor seems to be able to do at the moment. While I realize some of it is marketing hype, even if the performance is half of what the marketing people claim, it's still impressive. [www.nvidia.com] (the Adobe Premiere link shows encode speed)

While I understand the I/O issue of getting data to the GPU and back, many calculations are slow enough on the CPU that getting them done on the GPU despite the I/O latency is still a huge improvement. Optical flow retiming is ungodly slow right now, while solutions like this [openvidia.sourceforge.net] that do use the GPU look very compelling.
Re: Will Snow Leopard Make Compressing Video Faster???
September 21, 2009 04:54PM
Ehhhhhhh, no.

CUDA was a one-trick pony. It was a software development kit that allowed developers to run general math routines on certain NVIDIA-brand graphics coprocessors. It worked okay, more or less, but it was extremely difficult to use, extremely difficult to optimize for and kind of pointless outside of HPC environments where the software developers and the hardware engineers worked elbow-to-elbow.

OpenCL is an open framework for heterogeneous computing. Lemme explain what that means.

Think of your computer as a collection of "things that can do stuff." Back in the old days, if you needed to do something in your computer (say, add two numbers together), there was exactly one place where that calculation could be performed: in the single CPU that was present in the computer. But nowadays, computer-makers know how to put multiple CPUs in one computer, and furthermore the chip-makers have gotten quite clever about how to put multiple "things that can do stuff" on a single CPU. As a result, that top-of-the-line Mac Pro by your feet has a total of sixteen "things that can do stuff" inside it. It's got two CPUs, each with four cores, each with two logical thread execution units, for a total of sixteen stuff-doing things.

A long, long time ago, we learned how to make computers with multiple stuff-doing-things in them. It was called "symmetric multiprocessing," and it was cool. A computer with two things-that-can-do-stuff inside it could do two things at exactly the same time, which made it twice as fast overall as one with only one thing-that-can-do-stuff. Well, mostly. There were issues of processor affinity and cache management and memory I/O bandwidth to deal with, but generally, symmetric multiprocessing was cool. We still use the crap out of it today. The laptop on my lap has two identical things-that-can-do-stuff, in a symmetric multiprocessing configuration.

Around about that same time, some bright folks got the idea of putting in dedicated coprocessors. A coprocessor is also a thing-that-can-do-stuff, but it's different from a CPU. Think of a coprocessor as being like your idiot-savant brother-in-law. He can't really take care of himself, can't drive a car, can't function in society the same way a regular person can, but holy crap can he do arithmetic. It's, like, mind-boggling how fast he can do arithmetic in his head. He can't do anything else, but he can do arithmetic like nobody's business.

That's kind of what a coprocessor is. It's a microscopic idiot savant inside a computer. It can only do one thing, but it does that one thing blisteringly fast.

Those of us in the audience who are old-timers will remember the Macintosh Quadra 840AV. It was the fastest pre-PowerPC Macintosh ever made, and it had a built-in digital signal processor that ran alongside the CPU and enabled it to do things it otherwise would not have been powerful enough to pull off, like real-time speech recognition. It had a little microscopic idiot savant.

Trouble was, hardly any applications took advantage of the DSP coprocessor in the 840AV. Why? Because it was a giant pain in the ass to program. The 840AV had two things-that-can-do-stuff, but hardly anybody ever used more than one of them, because it was just too difficult to use them both together. This is how coprocessing has always been: tons of potential, too difficult to use in practice in all but the most unusual of circumstances.

Contrast this with symmetric multiprocessing, wherein the computer itself is responsible for handling the cooperative side of things, and applications need only provide more than one thing for the computer to do at once. Even in the absence of more than one thing to do at once per application (in the absence of multithreaded processes, in other words), a symmetric-multiprocessing computer will happily dole out the different applications running concurrently to the different things-that-can-do-stuff, like a dealer handing out cards at a poker game.
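The card-dealer picture can be sketched as a toy model (the process names and two-unit count are invented, and real schedulers weigh load, affinity and priority rather than dealing strictly in order):

```python
# Toy sketch: single-threaded processes dealt round-robin to identical units.
processes = ["FCP", "Compressor", "Safari", "Mail", "iTunes", "Finder"]
units = 2  # e.g. a two-core laptop

assignment = {p: f"core {i % units}" for i, p in enumerate(processes)}
print(assignment["FCP"], assignment["Compressor"])  # core 0 core 1
```

The point is that even single-threaded applications benefit from SMP, because concurrent processes land on different execution units without any of them having to cooperate.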

That's why your Mac Pro at your feet has sixteen identical CPU-based execution units, and zero built-in coprocessors. It's just more practical to scale a system symmetrically than it is to add in and program a bunch of dedicated coprocessors.

But wait. Your Mac Pro does have a bunch of dedicated coprocessors! If you've got a GeForce GT 120 in there, for example, you've got thirty-two of them, running at 1.4 GHz each. What's more, it's utterly trivial to program for them; just use the OpenGL API to send data and instructions down the PCI Express bus to the card. Easy as pie.

Oh, if only we could treat those things-that-can-do-stuff the same way we treat the other things-that-can-do-stuff. We could do more things at one time, and thus get work done faster!

Well, CUDA lets you do that, in the same way you could program the DSP in the Quadra 840AV. That is, it's possible, but it's a giant pain in the ass. And if the customer doesn't have that specific graphics card, well, they couldn't run your application at all, unless you took the time and trouble to write all your code twice. In the presence of a compatible coprocessor, your application would run the coprocessor version of your code; otherwise, it would run the entirely separate version you had to write to run on the built-in CPUs.

And the dirty little secret? "Compatible coprocessor" wasn't enough. See, NVIDIA wrote CUDA into a bunch of their graphics cards, but there's no way for the computer, itself, to know whether the graphics card could do the work faster than the CPUs. So your customer could easily find himself in a situation where his compatible graphics card from last year ran your application slower than it would have if you hadn't taken the time to use CUDA.

In short, CUDA was crap. It made for great SIGGRAPH demonstrations, but that's as far as it ever got in the commercial world.

OpenCL exists to address those shortcomings. See, OpenCL identifies and abstracts away all the various "compute units" (that's OpenCL's term for my much more eloquent "things that can do stuff") in your customer's system. Each logical execution unit in each CPU is a "compute unit," and so is each processing unit on the graphics card. Because OpenCL kernels are compiled as needed at run time, any OpenCL application can run on any supported compute unit, which means you don't have to recompile and update your application every few months as new coprocessors become available.

And here's the truly awesome part: Nobody says those coprocessors have to be on your graphics card. If somebody comes up with an OpenCL-supported PCI Express card with vector coprocessors on it, or FPGA coprocessors, or whatever coprocessors, any OpenCL-savvy application will automatically run on that hardware, and the OpenCL runtime inside the OS will be responsible for deciding which compute unit gets which task, based on workload, availability and relative capacity.

This has, in fact, already started happening.

Well, sort of. You've heard of Red Rocket, yes? It's the PCI Express card you can buy from Red that'll decode and debayer Red footage at full resolution in real time or faster. It's a coprocessor board. Now, it's specifically not an OpenCL-compatible device, but there's no theoretical reason why it couldn't be. There might be practical reasons why it shouldn't be, but there's no reason why it couldn't be. That's exactly the sort of thing that OpenCL was designed to handle.

CUDA was an interesting technical demonstration with essentially no practical value, but OpenCL (which was inspired by CUDA and similar efforts) has the potential to open up a whole new world of scalability for deskside, and even portable, computing. Applications that are written to take advantage of OpenCL (and remember, we're talking about future applications here; OpenCL requires refactoring and rewriting) will be able to scale extremely well in the presence of OpenCL-supported coprocessors, whether on the graphics card or elsewhere in the system.
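The scheduling idea running through this post, that the runtime rather than the application decides which compute unit gets a kernel, can be reduced to a toy model. This is a hand-rolled illustration, not the OpenCL API; the device names, unit counts, and load figures are all invented:

```python
# Toy model of a runtime choosing a compute device by free capacity.
devices = [
    {"name": "CPU pool", "units": 16, "busy": 0.75},  # 16 logical units, heavily loaded
    {"name": "GPU",      "units": 32, "busy": 0.10},  # 32 stream units, nearly idle
]

def pick_device(devices):
    """Pick the device with the most free capacity: units * (1 - busy)."""
    return max(devices, key=lambda d: d["units"] * (1 - d["busy"]))

chosen = pick_device(devices)
print(chosen["name"])  # the lightly loaded GPU wins here
```

The real OpenCL runtime is far more involved (it compiles kernels per device and weighs data-transfer cost too), but this is the shape of the decision the application no longer has to make itself.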

Re: WILL SNOW LEOPARD MAKING COMPRESSING VIDEO FASTER???
September 26, 2009 06:18PM
Phew!

Thanks, Man.

Harry.