You're seeing a combination of two effects at once. First, DVCPRO HD just doesn't have very much latitude. It's an 8-bit format, which means it can only resolve 256 discrete levels of luminance. Normally you can get away with that, thanks mostly to the chroma noise coming out of the sensor, which kinda-sorta looks like film grain. But take away the chroma noise (by, in this case, taking away the chroma entirely) and whoa, that's only eight bits.
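If you want to see what "only eight bits" does to a subtle gradient, here's a rough Python sketch. The numbers are made up for illustration, not pulled from any real camera: a gentle sky-like ramp, quantized to 8-bit code values, collapses onto just a handful of levels, and each level becomes a visible band.

```python
# Sketch: why 8 bits band on smooth gradients. A gentle luminance
# ramp (think: clear sky) quantized to 8-bit code values collapses
# many nearby values into one, producing staircase-like flat bands.
import numpy as np

# A subtle gradient across 1024 pixels, in normalized luminance.
# The 0.40-0.45 range is an arbitrary example, not a measured value.
sky = np.linspace(0.40, 0.45, 1024)

# Quantize to 8 bits: 256 possible code values, 0-255.
eight_bit = np.round(sky * 255).astype(np.uint8)

# Only a handful of distinct code values survive, so the smooth
# gradient turns into a staircase of flat bands.
print(len(np.unique(eight_bit)))  # 14 distinct levels across the whole ramp
```

Fourteen levels spread across a thousand pixels means each band is dozens of pixels wide, which is exactly the kind of contouring that jumps out once there's no noise to hide it.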
The other problem is H.264 compression. H.264 works by throwing away detail where the encoder thinks you won't miss it. Basically it looks for areas without much high-frequency detail and spends fewer bits on them. Once again, you can get away with it most of the time, because the H.264 algorithm is really good at guessing what's important and what's not, and at hiding reduced luminance resolution behind chroma. Except there's no chroma here, so H.264 has nowhere to hide.
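The frequency-domain trick H.264 plays can be sketched in a few lines. This is a toy demo of the general idea (transform a block, quantize high frequencies harder, transform back), nothing close to the real codec's integer transform or rate control:

```python
# Toy sketch of frequency-domain quantization, the idea behind
# H.264's spatial compression (NOT the actual codec math).
import numpy as np

N = 8

def dct_matrix(n):
    # Orthonormal DCT-II basis; H.264's integer transform is an
    # approximation of this family of transforms.
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

D = dct_matrix(N)

# An 8x8 block: a smooth ramp (easy to encode) plus fine noise-like
# detail (expensive to encode).
ramp = np.tile(np.linspace(0.0, 1.0, N), (N, 1))
detail = 0.1 * np.random.default_rng(0).standard_normal((N, N))
block = ramp + detail

coeffs = D @ block @ D.T  # forward 2-D DCT

# Crude quantization: coarser steps for higher frequencies, so fine
# detail gets rounded away harder than the smooth ramp.
step = 0.02 * (1 + np.add.outer(np.arange(N), np.arange(N)))
quantized = np.round(coeffs / step) * step

recon = D.T @ quantized @ D  # inverse 2-D DCT

# Many high-frequency coefficients round to exactly zero; the smooth
# part of the block comes back nearly intact.
print((quantized == 0).sum(), np.abs(recon - block).mean())
```

In a color image, the eye forgives this because chroma masks the softened luminance. Strip the chroma and those zeroed-out coefficients show up as smeared, contoured gradients.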
There's really nothing you can do to fix this, per se. Both of these effects are intrinsic properties of the two formats you used, DVCPRO HD and H.264. One was your acquisition format and the other is your delivery format, and there's no getting away from either.
But you can try to game the system. If it were me, I'd try doing a grain pass on my footage before I color-correct it. My tool of choice for that would be Shake, because I love its film-grain node, but you can get not-as-good-but-similar results in After Effects. I'm not aware of a good film-grain plugin for Final Cut, but I wouldn't be surprised if one exists.
Adding grain is a really good way to minimize visible banding. For instance, if you shoot a grainless, high-bit-depth format like Red, but have to deliver in an eight-bit format, adding grain before you output can help smooth out areas that would otherwise band on you, like a clear blue sky behind an actor. It's also pretty much mandatory whenever you work with motion graphics that include subtle luminance gradations.
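The mechanism here is classic dithering, and it's easy to sketch. Adding a little noise before the 8-bit quantize breaks the long flat runs that read as bands (`grain_sigma` below is an arbitrary knob for illustration, not a real plugin parameter):

```python
# Sketch: grain as dither. Noise added before quantization scatters
# neighboring pixels across nearby code values, dissolving the hard
# edges between bands into a texture the eye reads as grain.
import itertools
import numpy as np

def longest_run(values):
    # Longest stretch of identical pixel values: a crude stand-in
    # for how wide the visible bands are.
    return max(len(list(g)) for _, g in itertools.groupby(values))

sky = np.linspace(0.40, 0.45, 1024)  # the same gentle gradient

# Straight 8-bit quantization: long flat runs, i.e. visible bands.
plain = np.round(sky * 255).astype(int)

# Quantization with grain added first. grain_sigma is a made-up
# amount, in 8-bit code values.
rng = np.random.default_rng(7)
grain_sigma = 1.5
grainy = np.round(sky * 255 + rng.normal(0.0, grain_sigma, sky.size)).astype(int)

# The grainy version has no long flat runs left to band on you.
print(longest_run(plain), longest_run(grainy))
```

The average luminance is unchanged, so the gradient still reads as a gradient; the noise just keeps any one code value from owning a big contiguous patch of the frame.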