For anyone unaware, this is probably one of the better short and sweet explanations of what HuggingFace is.
It is a hub of code repositories hosting AI-specific files and configurations, and it has become a core part of the ecosystem behind many artificial intelligence breakthroughs, platforms, and applications.
FWIW, it’s a new term I am trying to coin in FOSS communities (Free and Open-Source Software communities). It’s a spin-off of ‘FOSS’, but for AI.
There’s literally nothing wrong with FOSS as an acronym; I just wanted one more focused on AI tech to set the right expectations for everything shared in /c/FOSAI.
I felt it was a term worth coining given the varied requirements and dependencies AI/LLMs tend to have compared to typical FOSS stacks. Making this distinction matters for some of the semantics these conversations carry.
Big brain moment.
Ironically, I think using this technology to do exactly that is one of its greatest strengths…
GL, HF!
Great suggestions! I’ve actually never interfaced with that first channel (SECourses). Looks like some solid tutorials. Definitely going to check that out. Thanks for sharing!
Lol, you had me in the first half not gonna lie. Well done, you almost fooled me!
Glad you had some fun! gpt4all is by far the easiest to get going with imo.
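If it helps, here’s roughly what getting started can look like with the gpt4all Python bindings (just a minimal sketch; the model name is only an example and gets downloaded on first use):

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model name is just an example from the GPT4All catalog; it is
# downloaded automatically the first time you run this.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")

# Generate a short completion entirely locally, no GPU or API key required.
response = model.generate("Explain what HuggingFace is in one sentence:", max_tokens=100)
print(response)
```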
I suggest trying any of the GGML models if you haven’t already! For running locally, they outperform almost every other model format at the moment.
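For the GGML files specifically, llama-cpp-python is one way to load them if you want to script things yourself (a rough sketch; the model path is a placeholder for whichever quantized GGML file you’ve downloaded):

```python
# Rough sketch of loading a GGML model with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder;
# point it at whatever quantized GGML file you have locally.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.ggmlv3.q4_0.bin")

output = llm(
    "Q: What does FOSAI stand for? A:",  # prompt
    max_tokens=64,
    stop=["Q:"],  # stop before the model starts a new question
)
print(output["choices"][0]["text"])
```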
If you’re looking for more models, TheBloke and KoboldAI are doing a ton for the community in this regard. Eric Hartford, too, although TheBloke is typically the one who converts these models into more accessible formats for the masses.
Thank you! Consider subscribing to /c/FOSAI if you want to stay in the loop with the latest and greatest news for AI.
This stuff is developing at breakneck speeds. Very excited to see what the landscape will look like for AI by the end of this year.
Absolutely! I’m having a blast launching /c/FOSAI over at Lemmy.world. I’ll do my best to consistently cross-post for everyone over here too!
After finally having a chance to test some of the new Llama-2 models, I think you’re right. There’s still some work to be done to get them tuned up… I’m going to dust off some of my notes and get a new index of those other popular gen-1 models out there later this week.
I’m very curious to try out some of these Docker images, too. Thanks for sharing those! I’ll check them out when I can. I could also make a post about them if you feel like featuring some of your work. Just let me know!