cross-posted from: https://lemdro.id/post/11955

TL;DR

  • Google has updated its privacy policy.
  • The new policy adds that Google can use publicly available data to train its AI products.
  • The way the policy is worded, it sounds as if the company is reserving the right to harvest and use data posted anywhere on the web.

You probably didn’t notice, but Google quietly updated its privacy policy over the weekend. While the wording of the policy is only slightly different from before, the change is enough to be concerning.

As discovered by Gizmodo, most of the updated policy is unremarkable, but one section now stands out: the research and development section. That section explains how Google can use your information, and it now reads:

Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public. For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.

Before the update, this section said “for language models” instead of “AI models,” and it mentioned only Google Translate, where it now also lists Bard and Cloud AI.

As the outlet points out, this is a peculiar clause for a company to add: the wording makes it sound as if the tech giant reserves the right to harvest and use data from any part of the public internet. Usually, a policy like this discusses only how the company will use data posted on its own services.

While most people likely realize that whatever they put online will be publicly available, this development opens up a new twist — use. It’s not just about others being able to see what you write online, but also about how that data will be used.

Bard, ChatGPT, Bing Chat, and other AI models that provide real-time information work by scraping information from the internet. The sourced information can often come from others’ intellectual property. Right now, there are lawsuits accusing these AI tools of theft, and there are likely to be more to come down the line.

  • agitatedpotato@lemmy.world · 1 year ago

    There’s no situation in which I can envision AI scraping the open internet being a good way to train them. Stop doing things cheaply and curate it yourself, or you’re gonna get what you paid for, which in this case is mostly free trash content.

    • Flying Squid@lemmy.world · 1 year ago

      What’s going to be fun is when they start scraping their own output and it becomes a recursive nightmare.

      • agitatedpotato@lemmy.world · 1 year ago

        As if the internet isn’t full of regurgitated lies already, can’t wait for the AI to reinforce them into itself.

    • Zarxrax@lemmy.world · 1 year ago

      They need massive amounts of data. There is simply no way to manually curate data on that scale, short of hiring like a million people. It’s very likely that they do use some sort of automated filtering to curate the data though.
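      The kind of automated filtering this comment speculates about can be sketched as a toy pipeline. This is a hypothetical illustration, not Google’s actual process; `filter_corpus` and its thresholds are invented for the example, and real web-scale pipelines layer on many more heuristics (language detection, fuzzy deduplication, toxicity filters, and so on).

      ```python
      import hashlib

      def filter_corpus(docs, min_words=20, max_words=10_000):
          """Toy data-curation filter: exact deduplication plus length heuristics.

          Illustrative only; thresholds and steps are assumptions, not any
          production system's rules.
          """
          seen = set()
          kept = []
          for text in docs:
              words = text.split()
              if not (min_words <= len(words) <= max_words):
                  continue  # drop very short or very long pages
              # Hash a normalized form of the text to catch exact duplicates.
              digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
              if digest in seen:
                  continue  # drop pages we've already kept
              seen.add(digest)
              kept.append(text)
          return kept
      ```

      Even this crude sketch shows why manual curation does not scale: the filtering decisions have to be encoded as cheap automated rules, with humans at most auditing samples of what the rules keep or discard.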

      • Preston Maness ☭@lemmygrad.ml · 1 year ago

        > They need massive amounts of data. There is simply no way to manually curate data on that scale, short of hiring like a million people. It’s very likely that they do use some sort of automated filtering to curate the data though.

        If we can throw tens of millions of soldiers into meat grinders for wars, then I think hiring a few million people to curate data is table stakes by comparison.