• a1studmuffin@aussie.zone · 2 months ago

    This article seems misleading. It uses the loaded Western term “selfie” to generate these images of different cultures smiling. If you use the term “group photo” instead, you get much more natural-looking results, where certain cultures are smiling and others aren’t.

    • u_tamtam@programming.dev · 2 months ago

      Isn’t that the essence of the issue: these models are loaded with biases that may or may not overlap with the dominant ones in inscrutable ways, producing new levels of confusion and indirection?

  • qprimed@lemmy.ml · 2 months ago

    Just when you are sure this article is going to fluff out on you, it doesn’t.

    But how does AI tell when someone is most likely lying? They’re smiling like an American.

    I was oddly surprised by how much I connected with this article. A useful read in a defining epoch.

  • GBU_28@lemm.ee · 2 months ago

    The content about the reasons for different smiles is cool, and the point about training data influencing the results is also good stuff.

    But as far as realistic image generation based on culturally relatable smiling goes, that sounds like a skill issue. You can’t just generate images of specific times, settings, or people with the “defaults”; you have to specify your prompt. A rough sketch of what I mean is below.
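
    Here’s a minimal sketch using Hugging Face’s diffusers library. The checkpoint and both prompts are my own assumptions for illustration, not anything from the article; any text-to-image model makes the same point.

    ```python
    # A sketch under assumptions: the checkpoint is an arbitrary public
    # text-to-image model, and the prompts are made up to illustrate the
    # thread's point, not taken from the article.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # Leaning on defaults: "selfie" drags in whatever the training data
    # associates with the word (often broad, American-style grins).
    default_prompt = "selfie of a family"

    # Being specific: name the framing, culture, era, and expression
    # instead of letting training-data bias fill in the blanks.
    specific_prompt = (
        "group photo of a Japanese family in 1970s Tokyo, "
        "neutral expressions, natural lighting, 35mm film"
    )

    pipe(default_prompt).images[0].save("default.png")
    pipe(specific_prompt).images[0].save("specific.png")
    ```

    Swapping “selfie” for “group photo” (as mentioned upthread) and spelling out the setting are both just ways of overriding the model’s defaults instead of letting the training data decide.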