• soulfirethewolf@lemdro.id

    I would love to self-host something like that, but I don’t have a good enough GPU for it.

    • 👁️👄👁️@lemm.ee

      Newer Pixels have hardware chips dedicated to AI in them, which could be able to run these locally, and Apple is planning on-device LLMs too. There’s been a lot of recent development on “small LLMs,” which have a ton of benefits: they’re easier to study, they run on lower-spec hardware, and they use less power. (For a rough sense of what “lower specs” already means, see the sketch below.)
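
      A minimal sketch of running a small quantized model entirely on CPU with llama-cpp-python. The model filename here is just a hypothetical example; any small GGUF-format model downloaded locally would do:

      ```python
      # pip install llama-cpp-python
      from llama_cpp import Llama

      # Hypothetical model file: a ~1B-parameter chat model quantized
      # to 4 bits, small enough to fit in a few GB of RAM with no GPU.
      llm = Llama(model_path="./tinyllama-1.1b-chat.Q4_K_M.gguf", n_ctx=2048)

      # Run a single completion on CPU and print the generated text.
      out = llm("Q: Name one benefit of small LLMs.\nA:", max_tokens=48, stop=["\n"])
      print(out["choices"][0]["text"])
      ```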

      • httpjames@sh.itjust.works

        Smaller LLMs come with big performance tradeoffs, most notably in their ability to follow prompts. Bard has billions of parameters, so mobile chips wouldn’t be able to run it.

        • 👁️👄👁️@lemm.ee

          That’s true right now, but small LLMs have only very recently become a focus of development. And judging by how fast LLMs have been improving, I can see that changing very soon.