Re: I have seen the future of editing
Posted by strypes
Wow! This will be to editing what auto pitch correction is to modern recording.
[blogs.adobe.com]
www.strypesinpost.com
I'm thinking of using this to cut the sound bites, if it's possible. The dialogue in many cases forms the base of the show. Being able to work directly off a transcript in the application is incredible, because it's quicker to skim through text than to skim through video.
www.strypesinpost.com
>But transcribed text, where does that come from?
My guess is Adobe's transcribing tool that is in PPro/AME. Much to be improved, but if it can get up to 90% of the text right, or if we can get a transcriber to transcribe in this thing instead of in Word, it would be a very different workflow. As someone who has spent hours going through interviews and stringing them together, this could be very cool. Not just for the transcriptions, but because you can see what you are cutting.

www.strypesinpost.com
Ok. Seeing the JNack entry has led me to re-open PPro and have a look again at the speech analysis feature in PPro, and now I'm not hoping for too much other than hidden transitions.
One of the big flaws with speech analysis right now is that it makes mistakes and you can't fix them. I mean, you can, but it's really clunky, and you really don't want to fix transcriptions in an NLE. Secondly, speech analysis pops up as a tiny window in metadata, with tiny text, and it's linked mainly to the source monitor. So I can't go to a point in an interview in the timeline by clicking on a word in the metadata window. The good news is that at least I can see the text used in the clip; the bad news is that I probably won't be using it because of my earlier point.

What I was hoping for with such technology is that something could change in the highly inefficient world of transcripts. Right now, on some shows, transcribers will transcribe footage in Word, and the writers/story producers will copy chunks and TC references into their script in FD or Word, and that metadata goes nowhere. The editor then pieces the sound bites together based on a printout of the script, and then he edits the show. Once that is done, someone presumably transcribes the whole edited show once again for the broadcast transcripts for the network. That metadata, which could be useful for editing, is never imported into the NLEs to marry up to the footage. Just hoping that will change someday.

It would be great if someday transcribers can fix the errors in speech analysis in maybe Prelude or Story, export an XML and sync that data up with the rushes, and editors can use that data both in the timeline and in the source monitor in PPro, and export that data as the broadcast transcript once the show's edited.

www.strypesinpost.com
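To make the "marry transcript metadata to footage" idea concrete, here is a small hypothetical sketch: pull "HH:MM:SS:FF" timecode references out of a plain-text transcript so they could later be matched against clip timecode. The regex, frame rate, and sample transcript are all assumptions for illustration, not anything PPro or Story actually does.

```python
# Sketch: extract timecode-referenced lines from a Word-style transcript.
import re

TC = re.compile(r"\b(\d{2}):(\d{2}):(\d{2}):(\d{2})\b")

def transcript_markers(text, fps=25):
    """Return (frame_number, text) pairs for each timecoded transcript line."""
    markers = []
    for line in text.splitlines():
        m = TC.search(line)
        if m:
            h, mi, s, f = (int(g) for g in m.groups())
            frames = ((h * 60 + mi) * 60 + s) * fps + f
            markers.append((frames, TC.sub("", line).strip()))
    return markers

doc = """01:00:10:05 I grew up on a farm...
01:02:45:00 The hardest year was 1998."""
print(transcript_markers(doc))
```

An NLE-side tool could then match those frame numbers against clip start timecodes to drop markers on the rushes.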
> It would be great if someday transcribers can fix the errors in speech analysis in maybe Prelude or Story, export an XML and sync that data up with the rushes and
> editors can use that data both in the timeline and in the source monitor in PPro, and export that data as the broadcast transcript once the show's edited.

And the XML can be saved as subtitle cards! Oh, yummy, yummy, yummy.

www.derekmok.com
>And the XML can be saved as subtitle cards!
An XMP reader that can take the speech analysis data from just the portions used and display it as a text overlay would be very nice. Ideally, of course, it takes XMP from the audio portion of the clip, not the video portion. Then it's all about formatting, or whether they could set rules, like making a separate subtitle for every 3 seconds of video. That would reduce the time spent on formatting the subtitles. But we're at the stage where we can't even apply a global font change within the app, except with XML tools.

I get you, Grafixjoe, this would be a big help to dialogue/interview-heavy edits, not so much for effects-heavy edits.

www.strypesinpost.com
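The "separate subtitle for every 3 seconds" rule is simple enough to sketch. Assuming word-level timestamps from a speech-analysis pass (the sample data below is invented, not real XMP output), grouping words into cards could look like this:

```python
# Sketch: group word-level timestamps into subtitle cards, starting a new
# card roughly every 3 seconds of video.
MAX_CARD_SECONDS = 3.0

def words_to_cards(words, max_len=MAX_CARD_SECONDS):
    """words: list of (start_seconds, text) in playback order."""
    cards = []
    current, card_start = [], None
    for start, text in words:
        if card_start is None:
            card_start = start
        if start - card_start >= max_len and current:
            cards.append((card_start, start, " ".join(current)))
            current, card_start = [], start
        current.append(text)
    if current:
        cards.append((card_start, words[-1][0], " ".join(current)))
    return cards

sample = [(0.0, "Being"), (0.4, "able"), (0.8, "to"), (1.1, "work"),
          (1.6, "directly"), (2.2, "off"), (2.6, "a"), (3.2, "transcript"),
          (3.7, "is"), (4.1, "incredible")]
for card in words_to_cards(sample):
    print(card)
```

A real implementation would also break cards at the pauses the analysis already detects, rather than cutting mid-phrase.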
Nick Meyers Wrote:
-------------------------------------------------------
> "if they could set rules, like make a separate
> subtitle for every 3 seconds of video."
>
> i think given the amount of analysis going into
> determining pauses, etc,
> that would be very simple.

Even if they were able to make subtitles, the current version (CS6) can't do anything like that, and implementing it would be quite a big deal.

Andreas
this new tech seems pretty advanced.
i mean if they can analyse audio & video to make smooth edits, and ID dialogue PAUSES, then they should be able to ID dialogue phrases that work syntactically, or close to it.

my next dream is that the transcription / subtitle info lives within some additional track in the video, or is permanently linked to it (like in FCPX), and at the same time is endlessly editable. so you can simply edit your program, and turn the subs on or off as simply as a timecode overlay. the overlays of course follow the audio track, not the video track, and are smart enough to know when an audio clip is disabled, or has its levels set to zero.

i have to admit something like this is more exciting to me than editing being done by a computer.

nick
Yea. Subtitle exchange is a bit clunky in PPro. Nothing I know of lets you import text into the title tool or export the data from the title tool into another application. I can't locate the text values in the PPro project file or in the FCP 7 XML. It's a nice title tool, but you can't send the information anywhere.
www.strypesinpost.com
>so you can simply edit your program, and turn the subs on or off as simply as a timecode overlay.
I gotta say, it seems like FCP X may have an edge here. I mean, that's essentially what Roles is, or what it should grow into. FCP X seems to be built for metadata manipulation. Adobe does have a strong platform to build from with all these tools in development (Prelude, PPro, Story, Speech Analysis, XMP, etc.).

But yea, what I'd want is a metadata reader that pulls data from selected audio tracks, or maybe a metadata reader clip object that lets you specify exactly which audio track to obtain data from. But it has to be editable.

What I also hope for is the ability to export that data out from PPro, e.g. as a Chyron list, subtitle formats, etc., and also as broadcast transcripts.

www.strypesinpost.com
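On the "export subtitle formats" wish: SubRip (.srt) is one of the simplest targets, so here is a rough sketch of what that export could look like. The card data is invented; this is just the file-format side, not anything PPro exposes.

```python
# Sketch: turn (in_seconds, out_seconds, text) cards into SubRip (.srt) text.
def to_srt_time(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def cards_to_srt(cards):
    """Number each card and emit the standard SRT block layout."""
    blocks = []
    for i, (start, end, text) in enumerate(cards, 1):
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

print(cards_to_srt([(0.0, 2.5, "Hello there."), (2.5, 5.0, "Welcome back.")]))
```

A Chyron list or broadcast transcript would be the same cards serialized into a different template.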
"Yea. Subtitle exchange is a bit clunky in PPro. Nothing I know of lets you import text into the title tool or export the data from the title tool into another software. I can't locate the text values in the PPro project file or in the FCP 7 XML. It's a nice title tool, but you can't send the information anywhere."
Gerard,

What you said is both true and false. I was faced with this problem a long time ago and created a solution. I use(d) to script Photoshop to create subtitles. It uses an STL file to control time and text. The app creates an EDL and a bunch of layered TIFF files to import into PPro. These subtitles can always be edited from the timeline in PS (with PS you have more or less the same controls and options as with the Title tool inside PPro). Once you want to get your text out of PPro, export the subtitle track; my app retrieves the text from the PS files again into an STL, which can be converted to any other format.

-Andreas

Some workflow tools for FCP [www.spherico.com]
TitleExchange -- juggle titles within FCS, FCPX and many other apps. [www.spherico.com]
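For readers unfamiliar with the STL side of that workflow, here is a rough sketch of reading a Spruce-style text STL, the kind of "time + text" file described above. The exact line format assumed here ("HH:MM:SS:FF , HH:MM:SS:FF , text") and the 25 fps default are assumptions, not a claim about Andreas's actual tool.

```python
# Sketch: parse a Spruce-style text STL into (in_frame, out_frame, text).
def tc_to_frames(tc, fps=25):
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def parse_stl(text, fps=25):
    events = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("//"):  # skip blanks and comments
            continue
        tc_in, tc_out, sub = (part.strip() for part in line.split(",", 2))
        events.append((tc_to_frames(tc_in, fps), tc_to_frames(tc_out, fps), sub))
    return events

sample = """//Spruce-style sample
00:00:01:00 , 00:00:03:12 , Hello there
00:00:04:00 , 00:00:06:00 , Welcome back"""
print(parse_stl(sample))
```

From events like these, a script could drive Photoshop (or any renderer) to emit one titled frame per event plus an EDL placing them on the timeline.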
Nice. That's a workaround, but that's a very nice workaround to get subtitles into PPro.
www.strypesinpost.com
strypes Wrote:
-------------------------------------------------------
> >so you can simply edit your program, and turn the
> subs on or off as simply as a timecode overlay.
>
> I gotta say, it seems like FCP X may have an edge
> here. I mean, that's essentially what roles is, or
> what it should grow into. FCP X seems to be built
> for metadata manipulation. Adobe does have a
> strong platform to build from with all these tools
> in development (Prelude, PPro, Story, Speech
> Analysis, XMP, etc..).
>
> But yea, a metadata reader that pulls data from
> selected audio tracks, or maybe a metadata reader
> clip object that lets you specify specifically
> which audio track to obtain data from. But it has
> to be editable.
>
> What I also hope for, is also the ability to
> export that data out from PPro. Eg. Chyron list,
> subtitle formats, etc.. And also as broadcast
> transcripts.

FCPX has good handling of metadata inside events and projects, but a pretty poor connection to the outside world. The amount of metadata that can be added from outside or inside the app is also very limited. So subtitles, effect controls, notes etc. can't be interchanged as well as clips can, and the kinds of metadata are limited to Apple's way and don't allow custom metadata.

Adobe's XMP does have a much wider range of metadata which can be applied, and it also allows custom data. But the metadata handling inside the app is not as fast as with FCPX. PPro also doesn't allow a seamless exchange (forth/back) with other applications, though using a sidecar XMP file for each media file makes it easier to get limited support.

Legacy FCP was pretty cool, as you were able to add as much metadata to a file as you wanted, which could also be exchanged with third-party apps and modified by them. The big problem was/is that these metadata (or most of them) weren't available in the UI. One of the biggest advantages of FCP had been the option to control it from outside.
So you could run FCP, making changes in the edit or wherever, and have a third-party app running at the same time to extract those changes in both editing and metadata, and from there modify them inside the edit without doing an XML export to file and reimporting the XML into FCP again. That's what I miss most. Both FCPX and PPro miss this invaluable feature.

I wrote an email to Nick earlier today regarding subtitles and how to handle them effectively with FCP. FCP keeps a UID and an "itemhistory" for each clip, generator, etc. This allows you to keep all the changes you make externally linked to the UID instead of to a timecode or clip name in the edit.

Metadata are the way to go, but that needs both understanding and good tools, so I think we have to wait a while still.

Andreas
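A toy illustration of that UID idea: external subtitle data keyed to a clip UID rather than a timecode, so the link survives when the clip moves in a re-edit. All names and data structures here are invented for the example.

```python
# Sketch: subtitles keyed by clip UID survive a re-edit that moves the clip.
def subtitles_for_edit(edit, subs_by_uid):
    """Return (tc_in, text) for every clip in the edit that has a subtitle."""
    return [(clip["tc_in"], subs_by_uid[uid])
            for uid, clip in edit.items() if uid in subs_by_uid]

subs_by_uid = {"uid-123": "I grew up on a farm."}

# Version 1 of the cut, and a re-edit where the subtitled clip has moved:
edit_v1 = {"uid-123": {"tc_in": 100}, "uid-456": {"tc_in": 400}}
edit_v2 = {"uid-456": {"tc_in": 100}, "uid-123": {"tc_in": 700}}

print(subtitles_for_edit(edit_v1, subs_by_uid))  # subtitle at frame 100
print(subtitles_for_edit(edit_v2, subs_by_uid))  # same subtitle, now at 700
```

Had the subtitle been keyed to timecode 100 instead, the re-edit would have silently attached it to the wrong clip.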
Removing "ums" treats the symptoms of poor language skills rather than the disease, which is the neglect of correct diction in this day and age.
I saw an ad for Future Shop here on Canadian TV, I believe, and one of the quotes in the ad was "I learned how to text before I could write". I would not be bragging about this! So, the "umm" remover would not be needed if correct diction were still taught in school, along with correct enunciation of one's words.

just my 2/5 of a nickel (Canada got rid of the penny in 2012) <:

strypes Wrote:
-------------------------------------------------------
> Wow! This will be to editing what auto pitch
> correction is to modern recording.
>
> [blogs.adobe.com]
> h-to-help-video-editors.html
Dyslexia.
One can argue that Auto-Tune and the click track killed modern pop music. They fixed a lot of perceived flaws in music, but they also killed a lot of rhythm/timing and pitch variation.

www.strypesinpost.com