A hacker breached OpenAI’s internal messaging systems early last year, stealing details from employee discussions about how OpenAI’s technologies work. Although the hacker did not access the systems housing key AI technologies, the incident raised significant security concerns within the company, and even raised concerns about U.S. national security, reports the New York Times.

The breach occurred in an online forum where employees discussed OpenAI’s latest technologies. The systems where the company keeps its training data, algorithms, results, and customer data were not compromised, but some sensitive information was exposed. In April 2023, OpenAI executives disclosed the incident to employees and the board but chose not to make it public, reasoning that no customer or partner data had been stolen and that the hacker was likely an individual without government ties. But not everyone was happy with that decision.

  • circuscritic@lemmy.ca

    Uh I’m pretty sure they offer trauma counseling to all those contractors.

    I mean, sure, it’s from a sub-contracted provider, TraumaBots (powered by ChatGPT 3.5), but I’ve heard only 36% of TraumaBot’s patients end up killing themselves.

    So… I’d call that a win/win.