• duncesplayed@lemmy.one
      1 year ago

      I’m a university professor who uses whisper.cpp for video lecture transcriptions, so I’ll chime in here. The thing about whisper.cpp compared to pretty well every other option is that it is really, really good. The accuracy is almost always close to 100% (and that’s just on the ‘medium’ model; the ‘large’ model is probably even better).

      There is only one problem with whisper that I’ve found: if you use a low-quantization model (I believe I’m using a 4-bit quantized model), whisper can get stuck in a “no punctuation mode” if that happens your transcription will suddenly start to look like this there will be no punctuation or capitalization it’s quite annoying once it gets into this mode it can’t get back out again

      The way to get around that is to segment your audio. I use ffmpeg’s silencedetect filter to split the audio wherever there’s a >1-second pause (so that I don’t accidentally cut in the middle of a sentence or a word). Break the audio into roughly 10-minute segments and you shouldn’t see no-punctuation mode at all.
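      The segmentation step above can be sketched roughly like this. It assumes you run `ffmpeg -i lecture.wav -af silencedetect=noise=-30dB:d=1 -f null -` and capture its stderr, which contains lines like `silence_start: 612.3`; the function name, threshold, and greedy cut-picking rule are my own choices, not anything from the comment:

```python
import re

def pick_split_points(ffmpeg_stderr: str, target_len: float = 600.0) -> list[float]:
    """Choose cut times from ffmpeg silencedetect output (a sketch).

    Greedy rule: cut at the first detected >1 s silence that starts at or
    after each ~10-minute boundary, so cuts never land mid-sentence.
    """
    silences = [float(m.group(1))
                for m in re.finditer(r"silence_start: ([\d.]+)", ffmpeg_stderr)]
    splits, next_cut = [], target_len
    for start in silences:
        if start >= next_cut:
            splits.append(start)
            next_cut = start + target_len
    return splits
```

      Each returned time can then be fed back to ffmpeg (e.g. `-ss`/`-to` with `-c copy`) to cut the segments without re-encoding.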

      The other nice thing about Whisper is that it tags each fragment with a confidence level and start and end times. I use the confidence level to quickly jump to low-confidence points in the transcription and check for mistakes (though there usually aren’t any). I use the start and end times to automatically generate an .srt subtitle file, then use ffmpeg to bake in hardsubs for the students.
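      Turning timestamped fragments into an .srt file is mostly a formatting exercise. A minimal sketch, assuming the segments have already been collected as `(start_sec, end_sec, text)` tuples (whisper.cpp’s actual output format differs, so the tuple shape here is an assumption):

```python
def sec_to_srt(t: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(t * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """segments: iterable of (start_sec, end_sec, text) tuples (assumed shape)."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{sec_to_srt(start)} --> {sec_to_srt(end)}\n{text}\n")
    return "\n".join(blocks)
```

      The resulting file can be burned in as hardsubs with something like `ffmpeg -i lecture.mp4 -vf subtitles=lecture.srt out.mp4`.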

      So far it’s been working very smoothly and quickly. Even on my crappy old GTX 1060, I can generate subtitles at about 2-3x real time, with almost no manual intervention.

    • joey@lemm.ee
      1 year ago

      Kdenlive apparently supports Whisper. Check the link in the other comment.

      • BetaDoggo_@lemmy.world
        1 year ago

        From what I’ve heard they’re competitive for English, but I’ve never used DeepSpeech myself. Whisper has much more community support, so it’s probably easier to use overall.

  • filister@lemmy.world
    1 year ago

    Whisper is pretty good and open source; you just need to write your own script to do the automation.
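    Such a script mostly just strings two commands together: ffmpeg to extract audio (whisper.cpp expects 16-bit, 16 kHz mono WAV), then the whisper.cpp binary to transcribe it. A hedged sketch; the binary path `./main`, the model path, and the `-osrt` output flag reflect whisper.cpp’s CLI as I understand it, so check them against your build:

```python
from pathlib import Path

def build_pipeline(video: str, model: str = "models/ggml-medium.bin"):
    """Return the two command lines for one lecture video (a sketch).

    Paths and flags are assumptions: adjust the binary/model locations
    to match your whisper.cpp checkout.
    """
    wav = str(Path(video).with_suffix(".wav"))
    # whisper.cpp wants 16-bit PCM, 16 kHz, mono
    extract = ["ffmpeg", "-i", video, "-ar", "16000", "-ac", "1",
               "-c:a", "pcm_s16le", wav]
    # -m: model file, -f: input WAV, -osrt: also write an .srt file
    transcribe = ["./main", "-m", model, "-f", wav, "-osrt"]
    return extract, transcribe
```

    Each command can then be run with `subprocess.run(cmd, check=True)` in a loop over the lecture files.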

    And then you can also run each transcript through a summarisation model (e.g. via the OpenAI API) to create short summaries for each lecture, or to extract highlights and key points.

    You can then drop the transcripts into an Obsidian vault so they’re indexed and searchable, and use any of its plugins to make it even better.

    And you can use Syncthing to sync it to your phone.