• CeeBee@lemmy.world · 8 upvotes, 9 downvotes · edited · 1 year ago

    That Thorn uses Amazon’s facial recognition tool is especially contentious. Research by MIT and the ACLU has shown that it falsely identified people of color,

    No, what the ACLU did was knowingly leave the confidence threshold at the default setting of 80% and then act surprised when they got almost exactly 20% false matches.

    They knew exactly what they were doing.

    Amazon even responded to their claims, criticising the lack of proper configuration for such a complex system. But the ACLU’s excuse was that 80% was the default setting, so they just used whatever the system came with out of the box.

    So all they managed to prove is that the default setting isn’t adequate for accurate identification, which says nothing about what the system can do when correctly configured.
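    The threshold point can be illustrated with a toy sketch (the names, scores, and helper function are made up for illustration; this is not Rekognition’s actual API). Filtering the same candidate matches at an 80% similarity threshold versus the 99% Amazon has publicly recommended for law-enforcement use gives very different results:

```python
# Toy illustration of how a similarity threshold changes match results.
# Names and scores are hypothetical; this is not the Rekognition API.

def filter_matches(candidates, threshold):
    """Keep only candidate matches at or above the similarity threshold."""
    return [name for name, score in candidates if score >= threshold]

candidates = [
    ("Alice", 0.99),   # genuine match
    ("Bob",   0.86),   # lookalike, not actually the same person
    ("Carol", 0.81),   # lookalike, not actually the same person
]

# Default-style 80% threshold: the lookalikes come back as "matches".
print(filter_matches(candidates, 0.80))  # ['Alice', 'Bob', 'Carol']

# Stricter 99% threshold: only the genuine match survives.
print(filter_matches(candidates, 0.99))  # ['Alice']
```

    Same system, same data; the only difference is the operator-chosen threshold.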

    Edit: I see I’m being downvoted for stating facts.

    • CmdrShepard@lemmy.one · 22 upvotes, 1 downvote · 1 year ago

      Is there any evidence to suggest that law enforcement agencies have these complex systems properly configured? I’ve seen plenty of articles talking about minorities being arrested after some facial recognition software misidentified them. Claiming that the ACLU isn’t using the software properly doesn’t mean that anyone else is using it properly.

      • CeeBee@lemmy.world · 2 upvotes, 4 downvotes · edited · 1 year ago

        You’re talking about something else entirely. The ACLU’s argument is “these systems are so bad we can’t rely on them” and your argument is “law enforcement may not have them configured correctly”.

        One of those is factually false.

        That being said, every FR system is built differently, and each has its own advantages and considerations. But from what I’ve seen in the news over the past few years, it’s almost always a policy and procedure failure. Somewhere between using a photo of such low quality that it never should have been used and the verifying officer failing to notice that the person in the source photo and the suspect in front of them are different people, something broke down.

        I’m actually astonished at how bad the average person is at comparing photos of people. Just look up the conspiracy nonsense flat earthers spread about the Challenger accident. They’re convinced that each person who died is actually still alive and living under a new name. Then they show their “evidence”, and I couldn’t believe what I was seeing. Sure, the people are similar enough that they could fit the same verbal description, but when you actually compare features it’s easy to see they’re all different people and can’t be the same.

        I know it’s like that with some cops, because I know people in emergency services who have been taking FR courses. They told me that many departments (fire, police, 911 dispatch, forensics, etc.) are being trained on it. And not on the software, but on physically identifying people. Between this tech and these false arrests, I guess it’s come to light that some people, cops or not, lack the fundamental ability to see minor but critical differences in facial anatomy.

        Ultimately, whatever a computer system says, a person is making the final decision to arrest these people. This is where the issue lies.

        Edit: I guess downvotes mean “I don’t like that you’re right” here also. I worked in the FR field for almost a decade. I’m familiar with the topic.