• j4k3@lemmy.world
    9 months ago

    Anyone using the code-specific models – how are you prompting them? Are you using any integration with Vim, Emacs, or another truly open-source, offline text editor/IDE, i.e., not Electron- or Proton-based? I’ve compiled VS Code before, but it’s basically useless in that form, and the binary version sends network traffic like crazy.

    • z3rOR0ne@lemmy.ml
      9 months ago

      I’ve downloaded the 13B CodeLlama model from Hugging Face, run it on my NVIDIA 2070 via CUDA, and interfaced with it either through the terminal or LM Studio.

      Usually my prompts include the specific code block plus a wordy explanation of what I’m trying to do.
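
      For what it’s worth, the instruct-tuned CodeLlama variants are trained on the Llama-2-style `[INST] ... [/INST]` chat template, so wrapping the code block and the explanation in that format can help. A rough sketch (the helper name and wording are just illustrative, not anything official):

      ```python
      def build_prompt(code: str, request: str, lang: str = "python") -> str:
          """Wrap a code snippet and a plain-English request in the
          [INST] ... [/INST] template that Llama-2-style instruct models
          (including CodeLlama-Instruct) expect."""
          fence = "`" * 3  # markdown code fence around the snippet
          return (
              f"[INST] Here is some {lang} code:\n\n"
              f"{fence}{lang}\n{code}\n{fence}\n\n"
              f"{request} [/INST]"
          )

      print(build_prompt(
          "def add(a, b):\n    return a + b",
          "Add type hints and a docstring to this function.",
      ))
      ```

      If you’re going through the `transformers` library instead of the raw terminal, its chat-templating support can build this for you from the model’s own template.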

      It’s okay, but it’s not as accurate as ChatGPT, and it tends to repeat itself a lot more.

      For editor integration, I just opted for Codeium in Neovim. It’s a pretty good alternative to Copilot, imho.

  • RobotToaster@mander.xyz
    9 months ago

    I don’t like to sound like a broken record, but all the Llama models carry license restrictions on their use, which means they aren’t open source.

    • h3ndrik@feddit.de
      9 months ago

      And they don’t provide the source, so it’s neither open nor source. I get why and how Meta tries to make themselves look better, and I’m grateful for having access to such models. But words have meanings, and journalists should do better than repeat that phrasing and help water down the meaning of ‘open source’. (Which technically doesn’t mean free or without restrictions, but is often used synonymously.)

      • planish@sh.itjust.works
        9 months ago

        Don’t they provide the source for the code that actually runs the model? Otherwise, how are people loading it up and running it? Are they shipping executables along with the model weights?