Pentagon AI more ethical than adversaries’ because of ‘Judeo-Christian society,’ USAF general says

The path to ethical AI is a “very important discussion” being held at DOD’s “very highest levels,” says the service’s programs chief.

  • fubo@lemmy.world · 1 year ago

    tl;dr: The headline is false; the general did not actually say that. It sounded wrong to me, so I watched the video the article links to in order to check, and sure enough, the headline misrepresents what he said. However, the reality may not be any more reassuring.


    Hypothesis: Like, no, that’s obviously wrong; either the headline is trash or the general made a whole tossed salad with mango sauce out of whatever the people working on it said. (stated before further investigation; stay tuned)


    Updating: https://youtu.be/wn1yEovtYRM?t=3459


    Okay, wow.

    So the speaker is saying this at the end of the panel, in response to a question asking about the use of autonomous weapons.

    They want to talk about who’s trusted to make the decision of whether to employ lethal force in a combat situation: a human American soldier, who might be exhausted and not thinking clearly, or an algorithm that doesn’t get tired.

    And one thing they mention is that an enemy might not have ethics that would lead them to even care about that distinction. And they express that as “Judeo-Christian morality”.

    That doesn’t sit right with me. It sounds to me, in that moment, like they’re implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people. That sounds to me like the sort of bullshit that came out of the Vietnam War.

    But the rest of the answer sounds like they’re trying to point at the problem of making command decisions in scenarios where the opponent might deploy autonomous weapons first. If the enemy has already handed decision-making over to an algorithm, how does that affect what we should do?

    And they’re maybe expressing that to their expected audience — mind you, the Air Force is heavily infiltrated by far-right Christian radicals — in a way that they hope makes sense.


    Conclusion: The headline is incorrect; the general did not actually say that a Pentagon AI would be more ethical for any reason; he was talking about the human ethical decision of whether to trust AI to make decisions. But what he did say is complicated and scary for different reasons, including the internal culture of the US Air Force.

    • zoats@lemmy.dbzer0.com · 1 year ago

      you could work for PolitiFact with how dumb this attempt at “fact checking” was

      the headline is absolutely representative of what he implied and you are the one being misleading by claiming that it isn’t.

      • fubo@lemmy.world · 1 year ago

        Folks can go watch it and see. No need to be a butt about it.

        • rekliner@lemmy.world · 1 year ago

          Don’t mind that turd. You took the time to do a thoughtful breakdown. There’s a subtle but real difference between “Pentagon AI would be more ethical” and “AI managed by Pentagon staff would be used more ethically,” and you were right to point it out. The headline could be accused of oversimplifying or clickbaiting, but I don’t think it was intentionally falsifying claims. The real story, as you pointed out, is the sense of righteousness and the declaration of a moral high ground based on any religion.

          • fubo@lemmy.world · 1 year ago

            I think the general’s point stands, though.

            No matter what ethical system you use to decide whether to turn control of a combat situation over to an AI, the enemy might be using a different one and reach different conclusions; and what they decide, in turn, affects what you should decide.

            This is actually a decision theory issue, and it’s something that military strategists do study.
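
            A toy way to see the decision-theory point: the payoff numbers below are entirely made up for illustration, but they show how the best choice (keep a human in the loop vs. delegate to an autonomous system) can flip depending on which option the adversary picks.

            ```python
            # Toy 2x2 game, purely illustrative: each side chooses to keep a
            # human in the loop ("human") or delegate to an autonomous
            # system ("auto"). The payoffs are invented; only the structure
            # of the strategic interaction matters.

            # payoffs[(ours, theirs)] = our payoff
            payoffs = {
                ("human", "human"): 3,  # both deliberate: slower but safer
                ("human", "auto"):  0,  # we deliberate, they strike first
                ("auto",  "human"): 2,  # we gain speed, at some ethical cost
                ("auto",  "auto"):  1,  # both delegate: fast and dangerous
            }

            def best_response(theirs):
                """Our payoff-maximizing choice, given the adversary's choice."""
                return max(("human", "auto"), key=lambda ours: payoffs[(ours, theirs)])

            for theirs in ("human", "auto"):
                print(f"if they pick {theirs!r}, our best response is {best_response(theirs)!r}")
            ```

            With these made-up numbers, our best response flips from “human” to “auto” the moment the adversary delegates, which is exactly the strategic pressure being described.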

    • brsrklf@compuverse.uk · 1 year ago

      > That doesn’t sit right with me. It sounds to me, in that moment, like they’re implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people.

      This is, unfortunately, how many, many very religious people think. And it’s not only insulting for everyone not following their beliefs, but also terrifying in my opinion.

      People who believe their god is the only thing that makes them moral aren’t really moral. Because then they never consider why it’s important to, you know, not be an asshole. It’s just compliance.

      And the terrifying part is that since their only frame of reference regarding what “good” is would be whatever their religion dictates, it’s always on the verge of breaking completely. You just need to listen to the wrong interpretation at the wrong moment in your life.