• Gaywallet (they/it)@beehaw.orgOP · 2 months ago

    While it may be obvious to you, most people don’t have the data literacy to understand this, let alone use this information to decide where it can/should be implemented and how to counteract the baked in bias. Unfortunately, as is mentioned in the article, people believe the problem is going away when it is not.

    • leisesprecher@feddit.org · 1 month ago

      The real problem is implicit biases: the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don’t really explain how they arrived at a solution, you can’t even audit them.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 1 month ago

        I have a feeling that’s the point with a lot of their use cases, like RealPage.

        It’s not a criminal act when an AI did it! (Except it is and should be.)

      • Gaywallet (they/it)@beehaw.orgOP · 1 month ago

        I suppose I can wrap up my whole message in one closing statement: people who deny systematic inequality are braindead, and for whatever reason they were on my mind while reading this article.

        In my mind, this is the whole purpose of regulation. A strong governing body can put in restrictions to ensure people follow the relevant standards. Environmental protection agencies, for example, help ensure that people who understand waste are involved in corporate production processes. Regulation around AI implementation and transparency could ensure that people think about these issues, or at the very least that implementations go through a proper review process. Think institutional review boards for academic studies, but applied to the design and implementation of AI.

        > I’ll be curious what they find out about removing these biases, how do we even define a racist-less model? We have nothing to compare it to

        AI ethics is a field which very much exists: there are plenty of ways to measure and define how racist or biased a model is. The comparison groups are typically other demographics, such as in this article, where they compare AAE to standard English.
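        To make that concrete, one of the simplest fairness measures is the "demographic parity gap": the difference in favourable-outcome rates between two groups. This is a minimal illustrative sketch, not anything from the article; the group labels and numbers are made up.

        ```python
        # Hypothetical sketch of a demographic parity check.
        # All data below is invented for illustration only.

        def positive_rate(predictions):
            """Fraction of decisions that are favourable (1 = favourable)."""
            return sum(predictions) / len(predictions)

        # One list of model decisions per demographic group
        group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # e.g. standard English speakers
        group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # e.g. AAE speakers

        parity_gap = positive_rate(group_a) - positive_rate(group_b)
        print(f"group A positive rate: {positive_rate(group_a):.3f}")
        print(f"group B positive rate: {positive_rate(group_b):.3f}")
        print(f"demographic parity gap: {parity_gap:.3f}")
        ```

        A gap near zero means the two groups receive favourable outcomes at similar rates; a large gap is a red flag worth auditing, even when the model itself is a black box.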