The MI25 is a great deal for hobbyists even if the power draw is high, but would it work with local models like Falcon or LLaMA?

I know it has a different memory bus width, but I'm unsure whether that would fundamentally cause problems for open-source models.

  • noneabove1182@sh.itjust.works · 1 year ago

    I guess it depends on what you mean by usable. I think people have had success with ROCm; it's not as solid as CUDA, of course, but it's been more than usable.
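
    A quick way to sanity-check that in practice is a minimal sketch like the one below, assuming a ROCm build of PyTorch is installed (e.g. from the ROCm wheel index at download.pytorch.org). Note the MI25 is a Vega 10 / gfx900 part, which I believe newer ROCm releases no longer officially support, so an older ROCm version may be needed:

    ```python
    # Minimal sketch: verify a ROCm build of PyTorch can see the MI25.
    # Assumes PyTorch was installed from a ROCm wheel, e.g.:
    #   pip install torch --index-url https://download.pytorch.org/whl/rocm5.7
    import torch

    # ROCm builds of PyTorch expose the GPU through the regular torch.cuda API.
    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        # Quick smoke test: run a small matmul on the card.
        a = torch.randn(1024, 1024, device="cuda")
        b = torch.randn(1024, 1024, device="cuda")
        print("Matmul OK, result shape:", (a @ b).shape)
    ```

    If that runs, loading a quantized LLaMA or Falcon model through the usual libraries should work the same way it does on NVIDIA hardware, just with the ROCm backend underneath.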