• I think facial recognition technology is very different from the Fediverse software. The fact that those technologies are trained on predominantly-white data is no surprise; both of your examples are data-based (ML models) where the data itself contains the bias.

    I am talking more about the open-source projects. It’s important, as you rightfully call out, that we have a varied group of opinions within the developer group 👍

    • eldavi@lemmy.ml

      it’s not just ai or training data for it; it’s developers themselves too, including fediverse ones.

      the biggest non-ai/training-data examples that i can think of come from times before ai was ever a thing, like:

      • usps had difficulty validating addresses because the software they obtained assumed a euro-centric naming schema

      • airlines, health care providers, hotels, and state motor vehicle departments rejected registrations/reservations because trans people had no option to select their sex

      • health care providers misdosed patients because the software they used didn’t account for highly-athletic/bodybuilding people or people with chronic conditions

      there are SO MANY examples out there where the bias clearly comes from the developer instead of the training data, and there’s no way that any piefed developer is immune or can even effectively mitigate their biases.
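
      that first bullet is easy to reproduce without any ai at all. here’s a minimal sketch (hypothetical function names, not from any real usps vendor) of how a developer’s hard-coded idea of what a “valid” name looks like becomes the bias, and how a less presumptuous check avoids it:

      ```python
      import re
      import unicodedata

      # toy "name validator" of the kind the usps example describes: the developer
      # hard-codes a euro-centric assumption about what a name looks like.
      NAME_PATTERN = re.compile(r"^[A-Za-z]+ [A-Za-z]+$")  # exactly "First Last", ascii only

      def naive_validate_name(name: str) -> bool:
          """rejects anything outside the developer's assumptions -- no ml involved."""
          return bool(NAME_PATTERN.match(name))

      # perfectly real names that the hard-coded rule throws away:
      for name in ["José García", "Nguyễn Thị Minh Khai", "Sukarno", "O'Brien", "李明"]:
          print(name, naive_validate_name(name))  # every one prints False

      # a less biased check only rejects empty strings and control characters,
      # leaving naming conventions to the person who owns the name.
      def permissive_validate_name(name: str) -> bool:
          stripped = name.strip()
          return bool(stripped) and not any(unicodedata.category(c) == "Cc" for c in stripped)
      ```

      none of that came from training data; the bias is entirely in the regex the developer chose to write.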