
OpenAI accused of remaining silent on major security breach where hackers accessed internal messaging systems

The New York Times has published a report revealing that OpenAI, the creator of ChatGPT, suffered a major security breach last year.

According to the report, OpenAI chose to remain silent about the incident to avoid negative publicity. Further details shed light on how internal email systems were hacked and sensitive credentials stolen.

Management informed employees of what had happened but asked them to keep quiet and not disclose the news to the general public or to law enforcement agencies.

It then emerged that the hacker had stolen details from discussions on online forums used by OpenAI employees. These conversations were highly confidential and concerned new technologies that were due to launch soon.

Fortunately, the company has confirmed that the hackers failed to break into the systems where its GPT models were being trained, which would have been a far more serious problem.

Meanwhile, two sources revealed that employees were already worried about similar attacks originating from countries such as China, which appeared aimed at stealing artificial intelligence technology. They warned that the problem could escalate into a serious national security issue.

However, the response from some employees was shocking: it seemed almost as if the company did not treat security as a top priority, leading many to question its motives.

A former tech executive sent a memo to the company's board arguing that the organization simply wasn't doing enough to prevent such incidents. That could cost the company dearly in the future, as the theft of confidential secrets or ideas by foreign malicious actors would carry serious consequences for OpenAI.

The news of the breach, and the disunity it caused among the organization's workers, illustrates the kind of problems the company faces on a daily basis.

Former manager Leopold Aschenbrenner gave more details, recently alluding to these concerns in a podcast appearance.

OpenAI terminated his contract after accusing him of leaking data outside the organization, but he disputed this, arguing that his firing was politically motivated.

Seeing OpenAI go through a series of past disagreements over superalignment, as well as the board's ouster of Sam Altman from his own company, is telling of what else could be happening that many of us are unaware of.

Other reports have shown that leading AI researchers have left the company because they believe the board does not treat security as a priority. They warn that AI could threaten the world, yet nothing has been done to curb this alarming situation, leaving researchers questioning the end goal.

Artificial intelligence continues to advance, and its capabilities are growing. This could have a significant impact on the future, although experts believe the threat is not too great at present.

Image: DIW-Aigen

Read more: Samsung predicts 15-fold increase in second-quarter profits on AI boom