The Qwen3.5 models are still the best local models I’ve used, so I’m excited to see how this updated version performs.

  • TheCornCollector@piefed.zipOP
    26 days ago

    I’ve been using it for the past few days, and the output quality seems on par with or slightly better than Qwen3.5 27b. The biggest issue is token usage, which has exploded with this revision: it can easily reason for 20k-25k tokens on a question where the Qwen3.5 models used 10k. Since it runs more than 3 times faster, it still finishes earlier than the 27b, but I won’t have any context/VRAM left to ask multiple questions.

    Artificial Analysis has similar findings: [bar graph of output tokens for different models: Qwen3.6 35b at 140 million, Qwen3.5 35b at 100 million]
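    For anyone curious, the "more reasoning tokens but still faster" claim checks out on a back-of-the-envelope basis. A quick sketch, using the worst-case numbers from my comment above (the exact speedup will vary by hardware):

```python
# Back-of-the-envelope: does 2.5x more reasoning at >3x the speed
# still finish sooner? Numbers taken from the comment above.
old_tokens = 10_000   # typical reasoning length, Qwen3.5 27b
new_tokens = 25_000   # worst-case reasoning length on the new model
speedup = 3.0         # new model generates >3x faster (assumed exactly 3x here)

# Wall-clock time in arbitrary units (tokens / relative generation speed)
old_time = old_tokens / 1.0
new_time = new_tokens / speedup

print(f"old: {old_time:.0f}, new: {new_time:.0f}")
# The new model still finishes first, but it has consumed 2.5x the context.
```

    So the speed hides the cost in wall-clock terms, but not in context/VRAM terms, which is exactly the multi-question problem.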

    • FrankLaskey@lemmy.ml
      27 days ago

      Yes, I did see that as well. That does seem to be the real Achilles’ heel here. I’ll have to try it myself to see how much it exacerbates context-size limitations, given that I’d be running it on a single 24 GB VRAM GPU. I wonder if adjusting the reasoning-effort parameters could make a difference without affecting quality too much?
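      In case it helps: on an OpenAI-compatible server, the chat template's thinking toggle is often reachable per request via `chat_template_kwargs` (vLLM exposes `enable_thinking` this way for Qwen models; llama.cpp and other backends differ, so check your server's docs). A minimal sketch, where the helper name and the placeholder model name are my own assumptions:

```python
# Sketch: request payload for an OpenAI-compatible server that forwards
# chat_template_kwargs to the chat template (vLLM-style; assumption, not
# guaranteed for every backend). build_request is a hypothetical helper.
import json

def build_request(prompt: str, think: bool = True, max_tokens: int = 4096) -> dict:
    """Build a chat-completion payload; think=False asks the chat
    template to skip the <think> block entirely, saving context."""
    return {
        "model": "qwen3-...",  # placeholder; use your served model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        # vLLM-style extension; verify your server supports this key.
        "chat_template_kwargs": {"enable_thinking": think},
    }

payload = build_request("Why is the sky blue?", think=False, max_tokens=1024)
print(json.dumps(payload, indent=2))
```

      Capping `max_tokens` alone also bounds the damage, but it truncates mid-reasoning rather than reducing it, so toggling or budgeting the thinking phase is probably the better lever for quality.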