The hacking incident at OpenAI was hidden for over a year
It has been alleged that OpenAI was hacked early last year. The New York Times reported what the company had been keeping quiet about: the hacker is said to have broken into the company's internal messenger program, collecting conversations between employees and stealing details of OpenAI's artificial intelligence technologies. The incident exposed not only the fragility of the AI ecosystem but also OpenAI's lax attitude toward security.
The first problem revealed
According to the New York Times' initial report, the hacker does not appear to have reached the data used to train the AI. That potentially very sensitive information is therefore believed to be safe so far, and this is the reason OpenAI gave for not disclosing the incident to the world: the company reportedly kept the problem quiet precisely because "it was determined that it was not an incident that would threaten national security." OpenAI is said to have reported the breach only to its executives, in April 2023.
After the incident, Leopold Aschenbrenner, then a technical program manager at OpenAI, sent executives a memo arguing that the company should act to prevent its artificial intelligence technology from being stolen in the future. According to the New York Times, he also criticized the company for "not taking sufficient measures to prepare for attacks by the Chinese government and other hostile foreign forces."
ChatGPT, OpenAI's flagship artificial intelligence program, is becoming ever more useful, and its user base is growing rapidly. It is also used in a wide range of research and experiments. It is therefore reasonable to assume that sensitive data makes up a significant portion of what users enter into ChatGPT, and that the absolute amount is enormous as well. Yet OpenAI does not disclose how user-entered data is processed, nor does it clearly explain where and how the data used to train ChatGPT is sourced. This is not just an OpenAI problem; generative AI developers across the board take a similar attitude.
Therefore, when a data breach occurs at an AI development company, two kinds of data are automatically the first concern: the data used to train the AI and the data users enter when using it. OpenAI's explanation that "it is not an issue directly related to national security" answers neither concern. The first problem, then, is that a company believed to hold a great deal of sensitive data took the position that "there is no need to disclose the breach because it is not a national security issue" and kept it private for over a year.
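For users, the only part of this that is even controllable is what leaves their own machine. As a rough illustration, the Python sketch below redacts a few obviously sensitive patterns from a prompt before it would be sent to any generative AI service; the patterns and structure are assumptions for illustration, not any vendor's actual mechanism or a production-grade filter.

```python
import re

# Hypothetical client-side redaction pass applied before a prompt is sent
# to a third-party generative AI service. The patterns are illustrative
# assumptions only; a real deployment would need a far broader set.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a typed placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Contact [EMAIL REDACTED], card [CREDIT_CARD REDACTED].
```

Even so, this only touches the second category of data, and only from the user's side; what happens after the data reaches the provider remains exactly as opaque as before.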
The second problem revealed
Another OpenAI problem revealed by this breach concerns Aschenbrenner, mentioned above. He is said to have delivered a letter to executives urging the company to pay more attention to security, attaching a proposal explaining what it could do. None of his suggestions were adopted, and Aschenbrenner was fired earlier this year. He claims he was dismissed because he pointed out the company's security problems; OpenAI's position is that his proposal and his dismissal were handled separately.
A spokesperson for OpenAI told the media, "We appreciate Aschenbrenner's dedication, but it is difficult to agree with his views on the state of OpenAI's security." The spokesperson added that "the problems raised by the hacking incident have already been addressed" and that "this is a situation that was resolved through efforts on all sides."
![Microsoft, OpenAI reveal ChatGPT use by state-sponsored hackers ...](https://image-optimizer.cyberriskalliance.com/unsafe/1920x0/https://files.scmagazine.com/wp-content/uploads/2023/07/0714_chatgpt.jpg)
So why was Aschenbrenner fired? The official reason is "leaking information." Aschenbrenner addressed this on a podcast on June 4: "OpenAI said that I leaked information. So my colleagues and I asked them to explain what information was leaked and how. The answer I received from the company was that last year I had brainstormed ways to strengthen the safety and security of artificial intelligence, documented the process, and sought advice from three outside experts, and that this amounted to an information leak. And that was even though I sent the document with all potentially sensitive content redacted or deleted."
The two sides' accounts remain sharply at odds. The problem revealed here is that OpenAI is in considerable internal disarray. SecurityWeek, a foreign news outlet, wrote, "It seems clear that OpenAI is not being run in a harmonious atmosphere," adding, "Many people hold different philosophies and views about a powerful new technology called artificial intelligence, and it is becoming apparent that they are in conflict."
Things that come to mind
OpenAI made headlines in media of every kind last November, when news suddenly broke that CEO Sam Altman had been fired. The board members who voted to remove him said it was "because Altman is not communicating honestly and we can no longer trust him." But the situation turned when the entire company erupted and nearly all employees demanded that Altman be brought back. Within a week he was reinstated, and the board was reshuffled.
![OpenAI Fires CEO Sam Altman: Trust and Communication Issues Cited ...](https://www.cryptopolitan.com/wp-content/uploads/2023/11/DreamShaper_v7_OpenAI_Fires_CEO_-1.jpg)
At the same time, Altman emerged as the public face of the artificial intelligence field, and, belatedly, various accounts and testimony about him began to surface. The reason he clashed with the board was that he did not give it accurate information on very important matters such as the development and release of ChatGPT and the establishment of safety measures. Simply put, he is said to have lied to push through what he wanted and to have shared information far too late. These incidents reportedly piled up until trust was no longer possible. Even so, because the dismissal itself was not handled in a justifiable way, he ultimately kept his position as CEO.
The reason he is said to have deceived the board with lies comes down to a difference in philosophy about artificial intelligence. Most of OpenAI's board at the time believed that AI could be dangerous and had to be developed safely, while Altman leaned toward developing and commercializing it quickly in order to dominate the market. This was also corroborated by the testimony of Helen Toner, a former board member. Altman, for his part, says that he too is deeply concerned about AI safety.
The discomfort left behind
Behind the provocative headline that "OpenAI suffered a hacking incident," many uncomfortable issues remain unresolved. The first is that it is still not disclosed exactly where AI companies obtain the data they use to train their powerful models. The second is that no one knows exactly how the data users enter into AI models is processed. AI developers have, as a group, yet to give answers to either question.
Therefore, although artificial intelligence is still an unfinished technology, it is already something like a ticking time bomb. It is clear that an enormous amount of data sits behind the AI services presented to users, but no one knows how it is managed, and no one supervises it. That is how a company like OpenAI was able to judge the severity of a hacking incident on its own and cover it up for over a year. This is the third discomfort.
A simpler way to express this third discomfort is that "private companies have too much power." Since there are no regulations or standards for the safety mechanisms that AI algorithms must have, users currently have no choice but to trust and accept whatever safeguards the developers have devised on their own.
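To make that concrete, a "safety device" today is often nothing more than a filter the vendor writes for itself. The Python sketch below is a hypothetical example of such a self-imposed output check; the blocklist is invented for illustration and corresponds to no external regulation or standard.

```python
# A hypothetical, minimal "self-imposed guardrail" of the kind a vendor
# might bolt onto a model's output in the absence of any external standard.
# The blocklist below is invented for illustration; real systems use far
# more elaborate (and equally self-defined) classifiers.
BLOCKED_TERMS = {"credit card number", "social security number", "api key"}

def passes_guardrail(model_output: str) -> bool:
    """Return False if the output mentions any self-defined blocked term."""
    lowered = model_output.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

if __name__ == "__main__":
    print(passes_guardrail("Here is a recipe for bread."))   # True
    print(passes_guardrail("Your API key is sk-..."))        # False
```

The point is not the code but who writes it: both the blocklist and the decision to enforce it rest entirely with the vendor, which is precisely the imbalance described above.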