Anthropic’s case against the Pentagon could open up space for AI regulation

March 25, 2026
Anthropic challenges the US Pentagon’s ban in a San Francisco court

San Francisco, United States: A California judge has set the stage for a potential Anthropic victory over the regulation of weapons powered by artificial intelligence, a setback for the administration of United States President Donald Trump that brings the company one step closer to avoiding the loss of billions of dollars in government contracts.

    The Trump administration designated Anthropic a “supply chain risk” for its stance on increased regulation, a move that would bar the company from certain military contracts.


A district judge has ruled that the United States Department of Defense may have sought to illegally punish Anthropic for attempting to restrict the use of its artificial intelligence (AI) models in weapons that operate without human supervision and in mass surveillance.

    “This looks like an attempt to cripple Anthropic,” Northern California District Court Judge Rita Lynn said Tuesday.

Legal analysts say the ruling could set the stage for a preliminary injunction barring the Defense Department from labeling Anthropic a supply chain risk.

    “Their stated objectives are not fully supported by the War Department,” Charlie Bullock, senior research fellow at the Institute for Law and AI, a Boston-based think tank, said of the Defense Department’s designation of Anthropic as a supply chain risk.

This is the first time a US company has received the designation, which would result in the cancellation of its government contracts and bar it from working with government contractors.

On March 17, the Defense Department told the court that Anthropic’s insistence that its products not be used for AI-powered weapons or for domestic surveillance without human oversight would undermine the department’s “ability to regulate its own lawful operations”.

Anthropic’s lawsuit to overturn the designation is revealing about the limits of AI capabilities, how those capabilities can shape lives, and whether they will be regulated.

    “This case is kind of a moment to consider what kind of relationship we want between government and companies and what rights citizens should have,” says Robert Traeger, co-director of Oxford University’s Oxford Martin AI Governance Initiative.

Alison Taylor, clinical associate professor of business and society at New York University’s Stern School of Business, said, “In the US, technology is moving like a freight train and any idea of human oversight is becoming increasingly difficult. But people are worried about AI-related job losses, data centers, surveillance and weaponry. That means public opinion is moving away from AI.”

Over the past two weeks, several tech companies, think tanks and legal groups have filed briefs in court supporting Anthropic’s stance and calling for oversight and regulation of AI used in weapons and mass surveillance. The supporters range from employees at Microsoft and at Anthropic’s competitors OpenAI and Google to Catholic moral theologians and ethicists.

    In their brief, OpenAI and Google DeepMind engineers, in their individual capacities, said the case is of “seismic importance to our industry” and that regulation is important because AI models’ “chains of reasoning are often hidden from their operators, and their inner workings are opaque even to their developers. And the decisions they make in lethal contexts are irreversible.”

    Against the backdrop of such concerns, NYU’s Taylor said, “Anthropic is making a risky but good bet that positioning itself as an ethical AI company will help it shape regulation when that happens.”

Hallucinations and other problems

Anthropic has worked extensively on Pentagon contracts, and its Claude Gov model has been integrated into Palantir’s Project Maven, which assists with data analysis, target selection and similar tasks, including reportedly in the ongoing US-Israel war against Iran.

    While AI-powered weapons are not currently used without human supervision, Anthropic has called for continued human oversight in its contract with the Defense Department because, it says, AI models can hallucinate and are not yet completely reliable. While hallucinations are a concern in all AI models, the potential harm from the use of weapons could be massive.

Mary Cummings, professor of civil engineering at George Mason University’s College of Engineering and Computing and director of the Mason Autonomy and Robotics Center, found that in San Francisco, where most such cars are deployed, half of all accidents involving self-driving cars were caused by the car mistakenly perceiving an object ahead of it and applying the brakes, causing the car behind to crash into it.

“We call it phantom braking, and it is caused by hallucinations,” she told Al Jazeera.

In a February paper, she warned that “incorporating AI into weapons will face the same reliability issues as self-driving cars, including hallucinations”.

“Hallucinations are not the only concern. Such models may have different workflows, data biases, or model biases. We don’t yet know how safe they are from foreign manipulation. There are a lot of pieces to it and we don’t yet agree on what we consider safe and what we don’t,” says Annika Schoen, an assistant professor who researches the impact of AI on health systems at Northeastern University’s Bouvé College of Health Sciences.

Given that AI models, including Claude Gov, are not created by the military, they need to be tested for reliability when integrated into military systems, says Alok Mehta, director of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS), a Washington, DC-based think tank.

    “There may be delays in evaluation and benchmark testing. Models saturate the testing systems we have.”

    Others say that it is not so much the technology as the way it is used that can lead to errors.

    “I remember, in (early) 2020 there was an expectation that with such tools, civilian deaths would go down,” says Andrew Reddy, associate research professor at the Goldman School of Public Policy at the University of California, Berkeley, and founder of the Berkeley Risk and Security Lab.

    “But that’s not really the case because it depends on the data you feed. The challenge is not the AI-ness, but what is a valid target,” he says of how military personnel select targets from the range provided by the devices.

On domestic mass surveillance, too, OpenAI and Google researchers outlined concerns in their court submissions, although it is not clear whether the Pentagon currently uses AI for that purpose.

Their brief says that footage from more than 70 million cameras, credit card transaction histories and other such data could be collected to monitor the entire US population. “Even the awareness that such a capability exists creates a chilling effect on democratic participation.”

‘A public relations victory’

Before the court case, and amid growing public unease over AI, Anthropic was said to have a closer relationship with the Pentagon than many of its competitors, a relationship that benefited both sides.

    “The Pentagon thinks Anthropic has the best product for military use, so it’s pressuring the company to continue using it,” says CSIS’s Mehta.

For Anthropic, “the economics are very challenging for the AI industry. So you need a strong public sector business with billions of dollars of contracts,” he says.

    OpenAI stepped in to replace Anthropic in working with the Pentagon shortly after Anthropic’s contract ended. But Anthropic appears to have won “on public relations, if not substance,” says NYU’s Taylor.

Its status as an ethical AI company may have won it public popularity: Claude downloads increased rapidly in the weeks following the contract’s cancellation.

But Brianna Rosen, executive director of the Oxford Program for Cyber and Technology Policy, says that leaving a company to draw these lines is a sign of the government’s failure to do so.

    “For the first time, the United States is using AI to prepare targets in large-scale combat operations in Iran,” she says. “And lawmakers are still debating whether to draw the red line on fully autonomous weapons. The absence of governance is a national security risk in itself.”

The debate over regulating AI weapons only widens the gap between public concern about AI and reluctance to over-regulate AI innovation in other areas. Surveys have shown that Americans are worried about potential job losses and the climate impact of AI. An April 2025 poll from Quinnipiac University found that 69 percent of Americans thought the government could do more to regulate AI.

This rift has led the AI industry to emerge as a major donor in the 2026 midterm elections. Leading the Future, a super PAC that has received more than $100 million from OpenAI co-founder Greg Brockman, Palantir co-founder Joe Lonsdale and others, has funded ads against New York Assemblyman Alex Bores, who is running for Congress. Bores sponsored the RAISE Act, which would force AI developers to disclose safety protocols and accidents.

In February, Anthropic announced a $20 million donation to Public First Action, a PAC that will support candidates who favor AI regulation, including Bores.

    The Institute for Law and AI’s Bullock says that while AI companies are looking to develop industry standards for testing and evaluating their models, Anthropic is pushing for regulation because bad actors could violate such non-binding standards.

Experts say the court’s decision in Anthropic’s case and the upcoming midterm elections could together determine the direction of AI regulation.

    “This could create space for more thoughtful policy development,” says Oxford’s Rosen.
