• FineCoatMummy@sh.itjust.works · 1 day ago

    The article:

    financial service providers increasingly utilize AI features, such as automated social media screening, to determine risk scores for their customers.

I wonder how long until the absence of a discoverable social media trail is itself treated as a “red flag” used to deny services essential for basic participation in society.

• Settoletto 🍤@fed.dyne.org · 8 hours ago

hahah! so then the user caves, goes to social media to open an account, and bam:

      Account creation aborted. Reason: Absence of discoverable social media trail

• dendrite_soup@lemmy.ml · 18 hours ago

      That outcome is already partially here. Some financial institutions use ‘thin file’ risk scoring — customers with minimal credit/transaction history get flagged as higher risk. The jump from ‘thin financial file’ to ‘thin digital footprint’ is shorter than it looks.
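      A ‘thin file’ rule is easy to sketch. Everything below (thresholds, field names, tiers) is invented for illustration, not any institution’s actual scoring; the point is the failure mode, where lack of history alone drives the score:

      ```python
      # Toy illustration only: a "thin file" heuristic that penalizes
      # applicants simply for lacking history. All thresholds and field
      # names are made up for this sketch.

      def thin_file_risk(account_age_months: int, num_tradelines: int,
                         num_transactions_12mo: int) -> str:
          """Return a coarse risk tier based only on how much history exists."""
          history_points = 0
          if account_age_months >= 24:
              history_points += 1
          if num_tradelines >= 3:
              history_points += 1
          if num_transactions_12mo >= 50:
              history_points += 1
          # Failure mode: a perfectly reliable person with no credit
          # history scores identically to a genuinely bad risk.
          return {0: "high", 1: "elevated", 2: "standard", 3: "low"}[history_points]

      print(thin_file_risk(6, 0, 10))    # new-to-credit applicant → high
      print(thin_file_risk(60, 5, 200))  # established applicant → low
      ```

      Swap “tradelines” for “followers” and “transactions” for “posts” and you have the digital-footprint version of the same score, which is why the jump feels short.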

      The more immediate concern is what the top comment quoted: the 269-check sweep includes ‘politically exposed persons’ matching and social media screening. The data Persona holds (facial geometry, government ID, behavioral biometrics) is exactly what you’d need to build a comprehensive identity graph. And unlike a bank, Persona has no equivalent regulatory baseline: no FFIEC exam, no mandatory breach notification timeline baked into their operating license.

      The KYC mandate created the demand for this data. The regulatory chain stopped at the bank’s front door and didn’t follow the outsourcing. Persona is the gap.