I have a bunch of old VHS tapes that I want to digitize. I have never digitized VHS tapes before. I picked up a generic HDMI capture card, and a generic composite to HDMI converter. Using both of those, I was planning on hooking a VCR up to a computer running OBS. Overall, I’m rather ignorant of the process. The main questions that I currently have are as follows:

  • What are the best practices for reducing the risk of damaging the tapes?
  • Are there any good steps to take to maximize video quality?
  • Is a TBC required (can it be done in software after digitization)?
  • Should I clean the VCR after every tape?
  • Should I clean every tape before digitization?
  • Should I have a separate VCR for the specific purpose of cleaning tapes?

Please let me know if you have any extra advice or recommendations at all beyond what I have mentioned. Any information at all is a big help.

  • ChaoticNeutralCzech@feddit.org · 2 months ago

    So, uh… The EasyCAP device passes both fields into your PC, but the video says the driver does not interpret them correctly and applies what is probably the most common, incorrect deinterlacing method (see earlier comment with the method list). It is technically possible to reinterlace the video, though I haven’t needed to do that; if you do, do it before any lossy encoding to a file. I assume the community-written Linux driver has no such issue.

    The tutorial is mostly correct for people who want to create YouTube uploads with just one program (YouTube requires progressive video, its 480p stream cannot be 60 fps and has a terrible bitrate, and 576p for PAL is not available AT ALL, so 1080p60 makes sense), but I strongly recommend not deinterlacing or scaling in OBS; you can do that later. Record 480i (interlaced) files at a very high bitrate and perform the deinterlacing in post.
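
    If you end up capturing with ffmpeg rather than OBS, the workflow could look roughly like this (just a sketch, not a tested recipe; the device node, codec choices and filenames are placeholders, swap NTSC for PAL as appropriate, and ffv1 is simply one lossless option that covers the “very high bitrate” part):

    ffmpeg -f v4l2 -standard NTSC -i /dev/video0 -c:v ffv1 capture_480i.mkv
    ffmpeg -i capture_480i.mkv -vf bwdif=mode=send_field -c:v libx264 -crf 18 deinterlaced.mkv

    The first command stores the raw interlaced capture losslessly; the second deinterlaces in post with bwdif, where send_field outputs one frame per field (so roughly 60 fps from 480i60).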

    • KalciferOP · 2 months ago

      So, I bought an EasyCap device and ran some tests. I encountered a few things that I don’t quite understand, and I would really appreciate your input!

      I used a test VHS tape that I purchased at a thrift store (I’m not 100% sure whether it’s NTSC or PAL, but I’m fairly confident it’s NTSC, and I’m also not sure of its aspect ratio — I think it’s either 1.33:1 or 4:3). I’m playing the tape in a PV-D4745S-K VCR. I have the composite out of the VCR going into the aforementioned capture device, which is connected to a computer running Arch Linux.

      First, I used the following ffmpeg capture settings:

      ffmpeg -i /dev/video2 out.mkv
      

      After capturing a short snippet of the test tape, I probed its metadata with ffprobe -i out.mkv and saw that it was, strangely, 25 fps at 720x576 (which caused the video to be stretched vertically slightly), i.e. PAL. So, somehow, the NTSC VHS being played in an NTSC VCR was being converted to PAL. On top of that, the colors in the video were very overexposed. I tried a bunch of different manual settings (specifying interlacing with -vf "interlace", -standard ntsc, -vf scale=720:480, -vf fps=29.97, -standard NTSC), and none of them solved the issue. I then came across this answer on StackOverflow, which stated that capture cards themselves have settings for the video feed and that ffmpeg can modify them with the -show_video_device_dialog true option. From the ffmpeg documentation:

      show_video_device_dialog

      If set to true, before capture starts, popup a display dialog to the end user, allowing them to change video filter properties and configurations manually. Note that for crossbar devices, adjusting values in this dialog may be needed at times to toggle between PAL (25 fps) and NTSC (29.97) input frame rates, sizes, interlacing, etc. Changing these values can enable different scan rates/frame rates and avoiding green bars at the bottom, flickering scan lines, etc. Note that with some devices, changing these properties can also affect future invocations (sets new defaults) until system reboot occurs.

      Unfortunately, when trying this option, an error popped up saying that the option was unrecognized. After some digging (and prompting ChatGPT), I found that the option is apparently Windows-only, as it relies on Windows’ DirectShow system. The way to modify these settings on Linux is through the Video4Linux2 framework, which is controlled with v4l2-ctl. So, I ran the following:

      v4l2-ctl --device=/dev/video2 --list-formats-ext
      

      which showed the following entry:

      ...
      [0]: 'YUYV' (YUYV 4:2:2)
          size: Discrete 720x480
      ...
              Interval: Discrete 0.033s (30.000 fps)
      ...
      

      So it is able to output NTSC, i.e. 720x480 at 29.97 fps (I guess it rounds the fps up for whatever reason). So I then tried

      ffmpeg -f v4l2 -video_size 720x480 -i /dev/video2 out.mkv
      

      and it was able to output the video at 720x480 and 29.97 fps as desired, and the colors were no longer overexposed. Using the -vf "interlace" flag, I do also seem to be able to capture interlaced video, so it doesn’t force progressive output, which is nice.
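
      For completeness, I assume the whole thing could also be pinned down explicitly, both at the driver level and in the ffmpeg invocation, with something like the following (untested sketch; the pixel format is just what --list-formats-ext reported above, and ffv1 is an arbitrary lossless codec choice to keep the capture high-bitrate):

      v4l2-ctl --device=/dev/video2 --set-standard=ntsc
      ffmpeg -f v4l2 -standard NTSC -video_size 720x480 -pixel_format yuyv422 -i /dev/video2 -c:v ffv1 out.mkv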

      I thought the capture card would be able to just autodetect the input resolution so that ffmpeg could record at that, or at the very least that specifying NTSC in ffmpeg would force the standard, but neither of those worked and I’m not sure why. There’s also still an ongoing issue of the video being slightly zoomed in/cropped (I verified this by comparing against online sources of the same video, some of which were a VHS rip and others from non-VHS sources). I tested the VCR’s output on a regular TV, but unfortunately the TV forced 4:3 and cropped it even more, so I wasn’t able to make a perfect comparison; that was only additional horizontal cropping, though, and the vertical cropping from before was still present. To verify this properly, I’ll have to pick up another test tape to rule out the possibility that the tape I currently have was simply recorded in a cropped format.

      • ChaoticNeutralCzech@feddit.org · 2 months ago

        With the interlace filter, make sure you get the field order right. I wasn’t very familiar with ffmpeg back then and ended up using some GUI program I can’t remember. Also check whether the driver has an option to disable deinterlacing, because that happens at the driver level.
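
        If you’re unsure of the field order, ffprobe can at least show what the stream is tagged as, and bwdif/yadif let you force the parity explicitly, something like this (filenames are placeholders; if the tag comes back “unknown”, try both tff and bff and keep whichever combs correctly):

        ffprobe -v error -select_streams v:0 -show_entries stream=field_order -of default=nw=1 capture.mkv
        ffmpeg -i capture.mkv -vf bwdif=mode=send_field:parity=tff -c:v libx264 -crf 18 deint_tff.mkv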

        There is no difference between 1⅓:1 and 4:3; they’re just different representations of the same ratio. Rounding the ratio to 1.33 produces a negligible difference, but I would stick with 4:3 for a simpler pixel aspect ratio of 9:8 (1.125) as opposed to 150:133 (1.12782), assuming the capture is 720x480i60.
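
        Whichever ratio you pick, you only need to tag the display aspect ratio rather than rescale anything. For example (a sketch; filenames are placeholders), this rewrites the container-level aspect without touching the video data:

        ffmpeg -i capture.mkv -c copy -aspect 4:3 tagged.mkv
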

        As for the zoom, TVs will have some overscan because different equipment caused various borders but the capture card should capture all 480 lines. You can check that the output is not vertically scaled by taking a snapshot in a high-movement scene (beware that most image formats are limited to square pixels so better force a PAR of 1:1 for this purpose) and observing if the interlacing indeed causes 1:1 combing as expected. Checking for horizontal crop can be done with another video source (camera, DVD player, STB, game console) generating a test pattern or at least a known image. However, if the vertical scale is correct and the content aspect ratio looks subjectively fine at 4:3 SAR, the crop is most likely OK.