Anthropic has released Claude Opus 4.7, an upgrade to its flagship model that advances the capabilities developers rely on most: heavy autonomous coding, high-resolution image processing, and sustained performance across long, multi-session tasks.
The model is available today across Claude's full product suite, the API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry, at an unchanged price of $5 per million input tokens and $25 per million output tokens.
The major improvement is in software engineering. Early-access users report being able to delegate their most demanding coding work, tasks that once required close supervision, and trust Opus 4.7 to complete them with minimal hand-holding.
The model validates its own output before reporting back, a behavioral change that reduces the back-and-forth typically required on complex agentic runs.
Instruction-following has also been tightened significantly, and developers should take note: where earlier models interpreted prompts loosely or skipped steps, Opus 4.7 takes instructions literally.
Anthropic recommends re-testing existing prompts before migrating, as strict parsing can produce unexpected results from legacy system prompts.
Opus 4.7 now accepts images up to 2,576 pixels on the long edge, approximately 3.75 megapixels, more than three times the maximum of previous Claude models.
That jump is not cosmetic. It opens up practical use cases that were previously blocked by resolution limits: computer-use agents reading dense UI screenshots, data extraction from complex diagrams, and any workflow that depends on pixel-level visual accuracy.
Because higher-resolution images consume more tokens, Anthropic notes that users who do not need the extra fidelity can downsample images before sending them to the model to manage costs.
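A minimal sketch of that preprocessing step, assuming a simple aspect-ratio-preserving resize; the `fit_long_edge` helper and the example cap below are illustrative choices for this sketch, not part of any Anthropic SDK:

```python
def fit_long_edge(width: int, height: int, max_long_edge: int) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most
    max_long_edge, preserving aspect ratio. Images already within
    the cap are returned unchanged."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# A full-resolution 2576x1288 screenshot capped at a 1288-pixel long edge:
print(fit_long_edge(2576, 1288, 1288))  # (1288, 644)
```

The resulting dimensions can then be passed to any image library's resize call (for example Pillow's `Image.resize`) before the image is attached to the request.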
Opus 4.7 is the first Claude model to carry Anthropic's new cybersecurity guardrails, introduced under the company's Project Glasswing framework.
Automated detection and blocking is now active for requests that indicate prohibited or high-risk cyber use, and reflects a deliberate decision to test these controls on less capable models before implementing them on more powerful Claude models.
Security professionals with legitimate needs, such as penetration testing, vulnerability research, and red-teaming, can apply to Anthropic's new cyber verification program for access to the models for those purposes.
Alongside the model, Anthropic is shipping a number of supporting features. A new xhigh effort level sits between the existing high and max settings, giving developers finer control over the reasoning-versus-latency tradeoff. Token budgets are entering public beta in the API, allowing developers to cap token spending in longer autonomous runs.
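For illustration, a request body using the new effort setting might look like the following; the field names (`effort`, `max_tokens`) and the model identifier here are assumptions for this sketch, not confirmed parameters of Anthropic's API:

```json
{
  "model": "claude-opus-4-7",
  "max_tokens": 4096,
  "effort": "xhigh",
  "messages": [
    {"role": "user", "content": "Refactor the payment module and fix any failing tests."}
  ]
}
```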
In Claude Code, the new /ultrareview command runs a dedicated review session that flags bugs and design issues, much as a careful human reviewer would. Pro and Max users get three free UltraReviews to try the feature.
