Let’s talk about our experiences working with different models, either known or lesser-known.
Which locally run language models have you tried? Share your insights, challenges, or anything interesting you noticed while working with them.
Which one is the “newer” one? Looking at the quantised releases by TheBloke, I only see one version of WizardLM 30B (in multiple formats/quantisation sizes, plus the unofficial uncensored variant).