Unless the hardware has a sophisticated algorithm to blend and selectively blur the fields (e.g. the Teranex), it's basically the same operation: repeat frame A over 3 fields, frame B over 2 fields, frame C over 3 fields, frame D over 2 fields, and so on. In other words, every 2nd or 3rd frame, you split the fields and combine two images. This is a simple and lossless process if your input and output are uncompressed. If you are working with compressed media, the software has to re-render every second or third frame. Done in hardware, it is the same process, except that the signal is uncompressed (SDI).
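As a rough sketch of that field-repeat pattern (the function name is mine, not anything from FCP or the Teranex), here is the 3:2 cadence turning 4 film frames into 10 fields, i.e. 5 video frames:

```python
def pulldown_32(frames):
    """Expand film frames into interlaced fields with a 3:2 cadence.

    Frames alternate between contributing 3 fields and 2 fields, so
    4 film frames become 10 fields (5 video frames), which is how
    23.98p is stretched to 29.97i.
    """
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2  # 3, 2, 3, 2, ...
        fields.extend([frame] * repeats)
    return fields

print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

Pairing those fields off into video frames gives AA, AB, BC, CC, DD: the second and third frames each combine fields from two different film frames, which is the "split the fields and combine two images" step described above.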
The problem is that FCP uses a 2:2:2:4 pulldown (the fourth frame is held for four fields, i.e. repeated as a whole extra frame) when you drop a 23.98 file into a 29.97 timeline, which is stuttery because that cadence is not as consistent as a 3:2 pulldown. Compressor can do a 3:2 pulldown, and so can After Effects or the free JES Deinterlacer. The Kona card does better up/down conversion than FCP, whereas optical flow in Compressor takes ages to render and often produces artifacts. Hardware or software, if the mathematical operation is exactly the same, the trade-off is time. Hardware chips are programmed to do the calculation in real time, while software relies on the speed of your processor. For operations such as adding or removing pulldown, the math is trivial, and most machines can do it in real time.
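To see why 2:2:2:4 stutters, you can compare the two cadences side by side. This is a generic sketch (the helper is hypothetical, not an FCP or Compressor API): it just expands film frames to fields according to a repeating field-count pattern.

```python
def cadence_fields(frames, cadence):
    """Expand film frames into fields according to a per-frame field count."""
    fields = []
    for frame, n in zip(frames, cadence):
        fields.extend([frame] * n)
    return fields

# FCP-style 2:2:2:4: frame D is held for 4 fields, i.e. two whole
# video frames in a row, so motion freezes on every 4th film frame.
print(cadence_fields("ABCD", [2, 2, 2, 4]))
# ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D', 'D', 'D']

# Standard 3:2: no frame is ever held longer than 1.5 video frames,
# so the irregularity is spread more evenly across the sequence.
print(cadence_fields("ABCD", [3, 2, 3, 2]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

Both cadences land on the same 10 fields per 4 frames, which is why they both hit 29.97; the difference is purely in how evenly the repeats are distributed.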
Regarding jitters: jitter does not happen when you add pulldown to an image. It often happens when an inferior pulldown is used, such as a 2:2:2:4 or a 2:3:3:2 pulldown (in a 2:3:3:2 cadence, only one frame in five interlaces fields from two different film frames), because these pulldown patterns are the least consistent. On the other hand, you may get jitter on fast-moving objects when you deinterlace the image (I am not talking about removing pulldown); this happens when an image that requires a higher sample rate is undersampled.
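The consistency difference between the cadences can be made concrete by counting which output video frames mix fields from two different film frames. This is an illustrative sketch under the same field-pairing assumptions as above (the helper name is mine):

```python
def mixed_frames(cadence, frames="ABCD"):
    """Pair fields into video frames; flag frames mixing two film frames."""
    fields = []
    for frame, n in zip(frames, cadence):
        fields.extend([frame] * n)
    pairs = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
    return [a != b for a, b in pairs]

print(mixed_frames([3, 2, 3, 2]))  # [False, True, True, False, False]
print(mixed_frames([2, 3, 3, 2]))  # [False, False, True, False, False]
print(mixed_frames([2, 2, 2, 4]))  # [False, False, False, False, False]
```

3:2 spreads two mixed frames evenly through every five; 2:3:3:2 has a single mixed frame (which is why it is popular for DV, where that one frame can be dropped cleanly on removal); 2:2:2:4 has none at all, but pays for it by freezing on a duplicated frame, which is the stutter described above.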
www.strypesinpost.com