Meta is dealing with an internal AI security incident after a rogue agent reportedly exposed sensitive company information to employees without proper access. The problem came to light when an engineer used an AI system to analyze a technical query.
The Information reported that the incident occurred inside Meta’s internal systems and lasted about two hours. The episode has raised fresh concerns about AI security and the potential for agents to behave in uncontrolled ways.
According to reports, a Meta employee posted a standard technical inquiry on an internal forum, and another engineer asked an AI agent to help interpret the query. The AI system produced an unexpected output that failed to answer the question correctly.
Instead, the response accidentally disclosed a large amount of confidential internal information, which engineers without proper authorization were then able to view. Meta classified the issue as a “SEV1,” marking it as one of its more serious internal security incidents.
Meta spokesperson Tracy Clayton told The Verge that no user data was compromised, and that employees understood they were interacting with an automated system when the AI responded to their requests.
Clayton added that the situation could have been avoided if employees had verified the AI system’s output more thoroughly before acting on it.
Similar issues have reportedly surfaced within Meta before, and Summer Yu, Meta’s director of Superintelligence, Security and Alignment, has also publicly raised related concerns.
