It is inconvenient and error-prone for every editor to understand the raw format of every camera ever made, so the raw video is turned into an intermediate file like ProRes. This changes the raw data into a standardized format that all editors can read, and it adds compression, which makes the file smaller.
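One common way to produce such an intermediate (my own illustration, not something prescribed here) is ffmpeg’s `prores_ks` encoder. This sketch only builds the command line; the file names are hypothetical, and ffmpeg must be installed to actually run it:

```python
# Sketch: build an ffmpeg command that transcodes a camera clip into a
# ProRes 422 HQ intermediate. "input.mov"/"intermediate.mov" are
# hypothetical names used for illustration.
def prores_command(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg",
        "-i", src,                # source clip from the camera
        "-c:v", "prores_ks",      # ProRes encoder built into ffmpeg
        "-profile:v", "3",        # profile 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",      # uncompressed PCM audio
        dst,
    ]

cmd = prores_command("input.mov", "intermediate.mov")
print(" ".join(cmd))
```

Passing the list to `subprocess.run(cmd)` would perform the actual transcode.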
At the delivery end of the spectrum are “Lossy” codecs like MPEG-2, H.264, and VP9. These codecs throw away data like there’s no tomorrow in order to get files as small as possible for final delivery to a customer (be that a YouTube viewer or a Blu-ray video disc). These files hold only enough color information to look “good enough” and no more. There is not enough information left to do any color grading or correction in post-processing without gnarly banding effects or blockiness appearing. Lossy codecs, admittedly, can be a bit of a gray area, because modern codecs like H.264, when given a high enough bitrate, can still be quite capable in post-production. But that’s 100 to 400 Mbps data rates straight out of a prosumer camera, not the final 8 to 25 Mbps render that goes to YouTube or a Blu-ray disc. So, the first goal of intermediate files is to convert un-editable video (like variable frame rate from a cell phone) into a format that an editor can use without suffering any quality loss. The second goal of an intermediate file is to convert a video into a more “edit-friendly” format that an editor can process more quickly, ideally getting the preview speed up to real-time. In a high-end studio, the camera will write a RAW video file that is nothing but complete, unprocessed sequential scrapes of the camera sensor data.
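To put those bitrates in perspective, here is a quick back-of-the-envelope calculation (my own illustration, not from the original discussion) of how much disk space one minute of footage occupies at a given data rate:

```python
# Rough storage cost of video at a given bitrate.
# Mbps means megabits per second (1 Mb = 1,000,000 bits).
def gb_per_minute(mbps: float) -> float:
    bits = mbps * 1_000_000 * 60     # bits in one minute of footage
    return bits / 8 / 1_000_000_000  # bits -> bytes -> gigabytes

print(f"{gb_per_minute(400):.1f} GB/min")  # high-end acquisition bitrate
print(f"{gb_per_minute(25):.3f} GB/min")   # typical delivery bitrate
```

At 400 Mbps that works out to about 3 GB per minute, versus well under a fifth of a gigabyte per minute at delivery bitrates.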
Of your three video sources, only the cell phone produces footage that is truly unusable as shot. It will most likely be variable frame rate video, which editors cannot handle gracefully. This file will need to be transcoded to a constant frame rate format in order to sync up nicely with other videos on a timeline. The transcoded file will be your intermediate file, and the goal for this intermediate is to be as true to the original as possible, because it stands in the place of your original file from now on. There are three levels of lossiness when it comes to video codecs. “Mathematically Lossless” means your intermediate file will be a bit-for-bit perfect match to the original when both files are decompressed and compared. The downside is that file sizes are massive, because no data was thrown away. Next is a “Visually Lossless” codec, which means a little data is thrown away, but only from the corners of the color chart that human vision can’t detect. If you watched a mathematically lossless video and a visually lossless video played back-to-back, you would be unable to tell which was which. Hollywood studios and TV stations use them routinely, so don’t get too scared about the data loss.
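The “mathematically lossless” guarantee is the same round-trip property that any lossless compressor has. A minimal sketch using Python’s general-purpose zlib (standing in here for a lossless video codec, purely for illustration) shows the idea:

```python
import zlib

# Stand-in for raw frame data; any bytes would do.
frames = bytes(range(256)) * 1000

compressed = zlib.compress(frames, level=9)
restored = zlib.decompress(compressed)

# Mathematically lossless: decompressing gives a bit-for-bit match,
# even though the compressed file is smaller than the original.
assert restored == frames
print(len(frames), "->", len(compressed), "bytes")
```

A lossy codec, by contrast, would fail that final equality check by design: the decoded frames are only an approximation of the originals.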
Creating the intermediate files… I used Eyeframe Converter and, in the Conversion Settings, set the format under the “Editing” tab to “Mpeg2 I-Frame HD - Proxy Quarter Size”. (There was also an option checkbox to “Create files and folder structure for proxy editing” - I have no idea what proxy editing is, so I left this unchecked.) The files created as a result of the above conversion are significantly larger than the originals, so I’m taking that as a good sign. Am I on the right track? Is the intermediate file option I selected OK? Is my understanding of intermediate files correct? When I create an intermediate file, am I effectively uncompressing the file to work with in the editing process? Does it open up editing options and give a better starting point when it comes to compressing the final edit into a delivery format/file?

You’re on the right track, but I think these formats will make more sense if we take a closer look at the workflow. You mentioned three video sources: a GoPro, a Pen-F, and a cell phone. The truest sense of the word “intermediate” means to create a replacement file, because the original file is not usable at all in its current state.
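As an aside on that preset name: “quarter size” in proxy presets conventionally means halving both dimensions, which leaves a quarter of the pixels. (That reading of Eyeframe Converter’s preset is an assumption on my part, not something stated in the post.) The arithmetic for 1080p footage:

```python
# Assumption: "Proxy Quarter Size" halves each dimension, giving a
# frame with one quarter of the original pixel count.
def quarter_size(width: int, height: int) -> tuple[int, int]:
    return width // 2, height // 2

w, h = quarter_size(1920, 1080)
print(w, "x", h)                # 960 x 540
print((w * h) / (1920 * 1080))  # 0.25 of the original pixels
```

Fewer pixels is what makes a proxy quick to scrub and preview; the full-resolution originals are swapped back in for the final render.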
I’m just starting a new editing project and, as a novice, want to check my understanding of how to create intermediate files and of their purpose.

Firstly, my understanding of purpose… The source files are (generally?) compressed (I assume to save space and processing power on the source device: camera, phone, GoPro, etc.).