Slightly OT: Cluster storage for Compressor?
Posted by Jeff Harrell
I feel like an idiot for not knowing this, but I can't seem to figure out how to set up cluster storage for Compressor.
As I understand it (and I've seen this happen in the past), when you set up a Qmaster cluster for Compressor, the cluster controller is supposed to export a directory via NFS that the other nodes in the cluster all mount, so they can read from and write to the same files without having to copy segments to local caches first. Except it doesn't seem to be working in my cluster.

I've got an Xserve running Leopard Server and two Mac Pros. I set them up so the server runs the cluster controller with eight instances, while the Pros each provide four Compressor instances. The Pros aren't mounting the cluster storage like they should, which means segments have to be copied to them over the network, then copied back. The net result is that transcoding to ProRes via the 16-instance cluster is actually slower than just doing it on the 8-instance Xserve alone, because of the additional network overhead.

Let me emphasize that this is not an Xsan. This is all just based around gigabit Ethernet.

I suspect the problem might be related to Leopard Server. I haven't had this issue before, but I've never built a Compressor cluster around a Leopard Server controller, either. So first thing on my to-do list this morning is to move the controller to one of the Mac Pros and see if that makes any difference. Have any of you run into this before?
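In case anyone wants to poke at the same thing: here's a rough sketch of how you might check whether a node actually sees the controller's export and has it mounted. The hostname `xserve.local` is just a placeholder; substitute your controller's name.

```shell
#!/bin/sh
# Sketch: check whether this compute node currently has an NFS mount
# from the cluster controller. "xserve.local" is a placeholder.
CONTROLLER="${CONTROLLER:-xserve.local}"

# To see what the controller actually exports over NFS, you'd run:
#   showmount -e "$CONTROLLER"

# Filter a `mount`-style listing (read from stdin) for mounts that
# come from the controller host.
check_mounts() {
  if grep -q "^$CONTROLLER:"; then
    echo "cluster storage mounted"
  else
    echo "cluster storage not mounted"
  fi
}

mount | check_mounts
```

If `showmount -e` shows the export but the mount never appears on the node, the problem is on the client side; if the export itself is missing, look at the controller.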
I don't think anyone has had an easy time setting up a cluster, but this might help:
[www.macworld.com] There are other tutorials out there too.

Michael Horton
Thanks, Michael. But it turns out I was an idiot. The compute nodes in the cluster don't just mount the cluster storage once and leave it mounted; instead, they mount the cluster storage only when they receive a job from the controller, then unmount it when they're done. So in fact, my cluster is working perfectly. I just didn't know it, because I wasn't watching closely enough to see it mount, process and then unmount the cluster storage on the compute nodes.
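For anyone else who gets fooled by this: the transient mount is easy to catch if you poll the mount table while a job runs. A rough sketch (the poll count and interval here are arbitrary placeholders):

```shell
#!/bin/sh
# Sketch: poll the mount table and print a line whenever the set of
# NFS mounts changes, to catch Qmaster's transient cluster-storage
# mount on a compute node. POLLS/INTERVAL defaults are arbitrary.
POLLS="${POLLS:-3}"
INTERVAL="${INTERVAL:-1}"

nfs_mounts() {
  # List only NFS entries; empty output just means no NFS mounts now.
  mount -t nfs 2>/dev/null || true
}

prev="$(nfs_mounts)"
i=0
while [ "$i" -lt "$POLLS" ]; do
  cur="$(nfs_mounts)"
  if [ "$cur" != "$prev" ]; then
    printf '%s NFS mounts changed:\n%s\n' "$(date)" "${cur:-<none>}"
    prev="$cur"
  fi
  sleep "$INTERVAL"
  i=$((i + 1))
done
```

Run it on a compute node, submit a job, and you should see the cluster storage appear when the segment arrives and disappear when the node is done.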
Not quite. The 16-instance configuration is only about 10% faster than the 8-instance, single-machine configuration on the type of conversion we do most. But after some tests this morning, I've pretty much come to the conclusion that we're I/O bound, of all things. The RAID on the Xserve is a real underperformer, only sustaining about 40 MB/s of simultaneous read and write during transcoding.
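For the curious, the kind of crude check I mean is just sequential `dd` against the RAID volume. The file path and sizes below are placeholders, and the block size is spelled out in bytes so it works with both BSD and GNU `dd`:

```shell
#!/bin/sh
# Sketch: crude sequential-throughput test with dd. TESTFILE is a
# placeholder; point it at the RAID volume you suspect is slow.
TESTFILE="${TESTFILE:-/tmp/ddtest.bin}"
BS=1048576             # 1 MB blocks, spelled out so BSD and GNU dd agree
COUNT="${COUNT:-64}"   # 64 MB total; use far more to defeat caching

# Sequential write; dd's summary line reports the rate.
dd if=/dev/zero of="$TESTFILE" bs="$BS" count="$COUNT" 2>&1 | tail -1

# Sequential read. The file may still be in the buffer cache, so the
# read number can look optimistic.
dd if="$TESTFILE" of=/dev/null bs="$BS" 2>&1 | tail -1

rm -f "$TESTFILE"
```

A file several times larger than RAM gives a more honest number, but even this quick version makes a 40 MB/s array stand out.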