The artificial intelligence security challenge is deepening as US defense and intelligence agencies race to adopt AI tools without the risk of leaking sensitive data.
The issue has gained attention following tensions between Anthropic and the Pentagon, highlighting how governments are struggling to balance innovation with privacy. As AI adoption expands, a new class of companies is stepping in to solve what experts call the AI privacy problem.
Is data safe from AI?
The AI security issue has created demand for specialized infrastructure providers. These companies build systems that let organizations use AI without exposing sensitive data. According to Nicolas Chaillan, founder of Ask Sage, the industry is currently valued at around $2 billion.
Companies like Amazon Web Services and Palantir offer secure cloud-based and software-based infrastructure for AI models. These firms play a critical role, enabling defense agencies to run AI tools within their classified networks.
The main issue in AI security is a trade-off. AI tools require huge amounts of data to function effectively; however, feeding them sensitive data increases the chances of breaches. Experts have warned that without adequate safeguards, AI tools could disclose sensitive information.
According to Emily Harding, a researcher at the Center for Strategic and International Studies, this creates a Catch-22: too much data poses a security threat, while too little undermines the effectiveness of AI tools.
To solve the problem, companies are adopting approaches like Retrieval-Augmented Generation (RAG), which allows AI models to access data without permanently storing it. The approach works like a safe room: data is retrieved only when needed for a specific task.
Brian Raymond, CEO of Unstructured, noted that such an approach helps maintain strict access controls. Analysts can retrieve data only based on their task and avoid any leakage.
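The idea of pairing retrieval with strict access controls can be illustrated with a minimal sketch. The document store, clearance labels, and keyword-overlap scoring below are all illustrative assumptions, not any vendor's actual product or API; the point is that documents above a user's clearance never reach the model's context window.

```python
# Minimal sketch of retrieval with access controls, as used in RAG pipelines.
# All data, names, and the scoring scheme are hypothetical.

DOCUMENTS = [
    {"text": "Logistics report: fuel convoy schedules for Q3.", "clearance": "secret"},
    {"text": "Public fact sheet: agency AI adoption roadmap.", "clearance": "unclassified"},
    {"text": "Threat assessment: adversary drone capabilities.", "clearance": "secret"},
]

CLEARANCE_LEVELS = {"unclassified": 0, "secret": 1}

def retrieve(query: str, user_clearance: str, top_k: int = 2) -> list[str]:
    """Return the most relevant documents the user is cleared to see.

    Documents above the user's clearance are filtered out before scoring,
    so they can never be passed to the model."""
    allowed = [
        d for d in DOCUMENTS
        if CLEARANCE_LEVELS[d["clearance"]] <= CLEARANCE_LEVELS[user_clearance]
    ]
    # Toy relevance score: count of shared words between query and document.
    q_terms = set(query.lower().split())
    scored = sorted(
        allowed,
        key=lambda d: len(q_terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return [d["text"] for d in scored[:top_k]]

def build_prompt(query: str, user_clearance: str) -> str:
    """Assemble a prompt containing only task-relevant, cleared context."""
    context = "\n".join(retrieve(query, user_clearance))
    return f"Context:\n{context}\n\nQuestion: {query}"

# An unclassified analyst's prompt never contains the secret reports.
print(build_prompt("AI adoption roadmap", "unclassified"))
```

In a production system the keyword overlap would be replaced by vector-embedding search and the clearance check enforced by the underlying data platform, but the control point is the same: filtering happens at retrieval time, before any data reaches the model.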
Additionally, the US Department of Defense has launched its own AI platform, known as GenAI.mil, to promote the use of AI across its agencies. Despite these efforts, the platform does not resolve the broader AI security problem: GenAI.mil handles only unclassified tasks.
