I’m kind of new to local AI and wondering what the move is here. Are they trying to pull off a Chrome/Android situation? Obviously I don’t trust any of these GAFAM giants, but I would be really interested in running a local LLM on my M1 Max (I briefly used DeepSeek last year). My use case would be mostly chat to help with academic and text-analysis tasks (don’t worry, I don’t just blindly trust LLMs, I know what I’m doing), so recommendations are welcome.

  • PetteriPano@lemmy.world · 4 points · 19 days ago

    These Gemma models are small enough to run on your laptop or phone. They’ll be bundling them with phones and Chromebooks anyway.
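    For your M1 Max, here’s roughly what that looks like in practice. A minimal sketch using the Hugging Face transformers route on Apple silicon; the model id and settings are just one example (Ollama or llama.cpp would do the same job):

    ```python
    # Minimal local-chat sketch for Apple silicon (PyTorch MPS backend).
    # Assumes `pip install transformers torch` and that you've accepted the
    # Gemma license on Hugging Face; the model id below is illustrative.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="google/gemma-2-2b-it",  # small instruction-tuned Gemma, fits easily in M1 Max memory
        device="mps",                  # Apple-silicon GPU backend in PyTorch
    )

    out = pipe(
        "Summarise the main argument of this abstract in two sentences: ...",
        max_new_tokens=150,
    )
    print(out[0]["generated_text"])
    ```

    Everything stays on the machine, which is the whole point for your academic/text-analysis use case.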

    Might as well get some goodwill and let the horses run free before someone extracts them from the edge device anyway.

    It can give them some plausible deniability if they just ship them as tech previews. I’m thinking of when that talking kitty app starts feeding into your delusions and telling you to hurt people.

    • altphoto@lemmy.today · 2 points · 11 days ago

      Prophetic, since a week later people found Google secretly pushing AI into Google Chrome. Hexavalent chrome is banned, so why can’t Chrome be banned?

      • PetteriPano@lemmy.world · 2 points · 10 days ago

        I’ll further prophesy that we’ll start seeing mixed on-device and cloud calls. Cloud for the heavy thinking is probably in the works right now.

        Next week your tiny local Gemma 4 will be feeding the cloud models predicted tokens to speed things up and reduce the work for Gemini. It only has to get them right about 66% of the time for a 2x speed-up.
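        That 66% figure is roughly the speculative-decoding arithmetic. A back-of-the-envelope sketch, assuming the usual draft-and-verify setup; the acceptance rate, draft length and relative draft cost below are illustrative, not measured:

        ```python
        # Rough arithmetic for "get it right ~66% of the time -> ~2x speed-up",
        # assuming standard speculative decoding: a small on-device draft model
        # guesses the next few tokens, the big cloud model verifies them in one pass.

        def expected_speedup(accept_rate: float, draft_len: int, draft_cost: float = 0.05) -> float:
            """Expected speed-up over plain one-token-at-a-time decoding.

            accept_rate: probability the big model accepts each drafted token
            draft_len:   tokens the small model guesses per round
            draft_cost:  cost of one draft token relative to one big-model token
            """
            # Tokens produced per big-model pass: its own token plus the accepted
            # prefix of the draft (geometric series in the acceptance rate).
            expected_tokens = (1 - accept_rate ** (draft_len + 1)) / (1 - accept_rate)
            # Cost of one round: draft_len cheap draft tokens plus one big-model pass.
            round_cost = draft_len * draft_cost + 1
            return expected_tokens / round_cost

        for rate in (0.5, 0.66, 0.8):
            print(f"accept {rate:.0%}: ~{expected_speedup(rate, draft_len=4):.1f}x")
        ```

        At a 66% acceptance rate with a 4-token draft that works out to roughly 2x, which is why the cheap local model doesn’t have to be all that good.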