I am currently looking for a model that can run on my phone; it could be <8B or even <4B. It should have a reduced positivity/yes-man bias. I am at a point in my language-learning journey where it’s more effective to learn by actually trying to construct sentences (often through interaction) than by just reading. Since there are times I am offline, a local LLM that is competent in multiple languages and decent at simulating characters texting would be a great help.

  • XiELEd@piefed.socialOP · 1 day ago

    Thanks! I can’t wait to try them. Though what differences have you found between Granite-4H and Qwen3-4B 2507?

    • SuspiciousCarrot78@aussie.zone · 24 hours ago

      Granite is much more straitlaced; Qwen is more expressive. Honestly, Qwen reminds me a lot of the early days with GPT-4-class models (and the benchmarks show it roughly matches them, too).