Microsoft is rolling out a significant shift in how organisations use AI across Microsoft 365, introducing Anthropic AI models, including Claude, into Microsoft Online Services. As many organisations now evaluate how to disable Anthropic in light of these changes, it's important to recognise that this update is part of Microsoft's broader strategy to give customers a choice of AI models without compromising security, compliance, or enterprise-grade protections.
Under this rollout, Anthropic models are enabled by default because Anthropic has officially been added as a Microsoft subprocessor, meaning Microsoft now manages Anthropic under the same contractual, privacy and security safeguards applied to all other Microsoft AI providers. The older opt-in method, which relied on Anthropic's separate commercial terms, has now been fully deprecated.
For Australian organisations, especially those in government, critical infrastructure, healthcare, financial services, education and Essential Eight-focused environments, understanding and configuring this new model correctly is essential.
The details below explain the changes, their impact on Australia and how to disable Anthropic as a Microsoft subprocessor if required.
Why Anthropic Is Now a Microsoft Subprocessor
Microsoft has integrated Anthropic AI models into Microsoft Online Services to deliver secure, responsible and enterprise‑ready AI experiences. The change replaces Anthropic’s standalone opt‑in approach with a unified Microsoft-led model.
Under this arrangement:
- Anthropic now operates with Microsoft’s oversight
- Usage is governed by the Microsoft Product Terms
- Data protections fall under the Microsoft Data Protection Addendum (DPA)
- Enterprise Data Protection (EDP) safeguards apply
- The Microsoft Customer Copyright Commitment (CCC) extends to Anthropic models within Copilot and Copilot Studio
This alignment ensures that Australian organisations can rely on consistent rules and protections without needing separate vendor agreements or additional legal reviews.
Where Anthropic Is Enabled by Default
For most commercial cloud tenants globally, Microsoft will automatically turn Anthropic models ON by default, giving users instant access to Claude-powered features across:
- Microsoft 365 Copilot (web, desktop, mobile)
- Copilot Studio (model selection during agent creation)
- Researcher
- Agent Mode in Excel
- Word, Excel and PowerPoint agents
This includes Australian commercial customers unless your organisation operates under a regulated or sovereign-cloud requirement.
Regions Where Anthropic Is OFF by Default
Some regions require extra sovereignty guarantees. In these places, the Anthropic toggle appears, but it starts OFF by default:
- European Union (EU)
- European Free Trade Association (EFTA)
- United Kingdom (UK)
This is because Anthropic models are not yet included in the EU Data Boundary and may involve cross-border processing.
Australia does not fall under this default-off category, but Australian organisations with sovereignty concerns may still choose to disable Anthropic manually.
Regions Where Anthropic Is Unavailable
Anthropic is not available at all in:
- GCC
- GCC High
- DoD
- Other sovereign cloud regions
These cloud environments do not yet have the required certifications for Anthropic (e.g., FedRAMP).
Australian organisations using Microsoft Australia Sovereign Cloud are expected to follow similar restrictions if Anthropic becomes relevant.
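The regional defaults described above can be captured in a small helper, useful when auditing multiple tenants. This is a sketch based solely on the states this article describes (the region keys and function names are illustrative assumptions, not any official Microsoft API):

```python
# Default Anthropic state per cloud/region, as described in this article.
# Region labels here are illustrative, not official Microsoft identifiers.
ANTHROPIC_DEFAULTS = {
    # Most commercial cloud tenants (including Australia): ON by default
    "commercial": "on",
    # Regions with extra sovereignty guarantees: toggle appears, OFF by default
    "eu": "off",
    "efta": "off",
    "uk": "off",
    # Sovereign/government clouds: not available at all
    "gcc": "unavailable",
    "gcc_high": "unavailable",
    "dod": "unavailable",
}


def anthropic_default(region: str) -> str:
    """Return the default Anthropic state for a tenant's region."""
    # Unknown sovereign-cloud regions are treated as unavailable, per the article.
    return ANTHROPIC_DEFAULTS.get(region.lower(), "unavailable")


def needs_review(region: str) -> bool:
    """Tenants where Anthropic starts ON should be reviewed before rollout."""
    return anthropic_default(region) == "on"
```

For example, `needs_review("commercial")` returns `True`, flagging Australian commercial tenants as ones where the toggle is live and a risk review is warranted.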
Why This Matters for Australian Organisations
Many Australian entities operate under strict obligations, such as:
- Essential Eight Maturity Model
- ISM (Information Security Manual)
- PSPF (Protective Security Policy Framework)
- Notifiable Data Breaches scheme
- State and federal privacy laws
- Industry‑specific compliance requirements
Since Anthropic is not yet part of any Australian data boundary commitments, privacy, risk and security leaders in Australia may choose to disable it until a full suitability and risk evaluation is completed.
ASD Blueprint Guidance for Government & Regulated Organisations
The Australian Signals Directorate (ASD) provides explicit guidance on Anthropic within the Blueprint for Secure Cloud, under the Microsoft 365 Copilot Settings → Data access baseline. The Blueprint references MC1193290, confirming that the enabled-by-default rollout applies only to commercial tenants from 7 January 2026. Microsoft is proactively excluding government tenants due to sovereignty and regulatory requirements.
ASD clearly advises agencies to set Anthropic to DISABLED until a formal suitability and risk assessment is completed. This ensures that agencies do not unintentionally expose data to AI models that fall outside government‑approved sovereignty boundaries.
Here is the official ASD Blueprint reference: ASD Blueprint Data Access
How to Opt In to Anthropic (If Default is OFF)
If you are in a region where Anthropic is OFF by default or your Australian organisation intentionally disabled it earlier, you can turn it on manually.
Official Microsoft documentation on AI subprocessors: https://learn.microsoft.com/en-us/copilot/microsoft-365/connect-to-ai-subprocessor
Steps to enable Anthropic as a Microsoft subprocessor:
- Open Microsoft 365 Admin Center
- Navigate to Copilot → Settings
- Select Data access
- Choose AI providers operating as Microsoft subprocessors
- Select Enable Anthropic as a Microsoft subprocessor
Note:
If you previously opted in under Anthropic’s old commercial terms, you must opt in again, as that legacy toggle has been removed.
How to Disable Anthropic as a Subprocessor for Microsoft Online Services
If Anthropic is turned ON by default in your tenant, as will be the case for most Australian commercial tenants, you may want to disable it during internal risk reviews or compliance assessments.
You must be a Global Administrator to perform this action.
Steps to disable Anthropic in Microsoft 365:
- Open Microsoft 365 Admin Center
- Go to Copilot → Settings
- Select Data access
- Click AI providers operating as Microsoft subprocessors
- Under Available subprocessors, choose Disable Anthropic as a Microsoft subprocessor
Once you disable Anthropic:
- Claude will no longer be available inside Copilot
- Researcher, Excel agents and Copilot Studio may lose certain capabilities
- Users will not see Anthropic options
- You can turn it back on anytime if your risk position changes
This is the new and only supported method of controlling Anthropic access.
Additional Controls for Copilot Studio (PPAC)
If Anthropic is enabled at the Microsoft 365 level, additional control switches appear in the Power Platform Admin Center (PPAC), allowing you to:
- Allow or block external LLMs
- Restrict Anthropic inside specific environments
- Control usage in Copilot Studio solutions
These controls are important for Australian organisations implementing Essential Eight–aligned governance or operating under sensitive security classifications.
Important Dates to Know
8 December 2025
- New Anthropic admin toggle appears in all eligible tenants.
- Enabled ON by default for most commercial regions.
7 January 2026
- Anthropic officially becomes a Microsoft subprocessor.
- Legacy opt‑in toggle is fully deprecated.
- If you had opted in under the old model, you must reapply the new toggle.
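The timeline above can be encoded for a simple tenant-readiness check, for example in a governance script that reminds admins when the legacy opt-in stops working. This is a sketch; the function names are illustrative assumptions:

```python
from datetime import date

# Rollout milestones described above.
TOGGLE_AVAILABLE = date(2025, 12, 8)   # new admin toggle appears, ON by default
SUBPROCESSOR_LIVE = date(2026, 1, 7)   # Anthropic becomes a subprocessor; legacy opt-in removed


def toggle_visible(today: date) -> bool:
    """The new Anthropic admin toggle appears from 8 December 2025."""
    return today >= TOGGLE_AVAILABLE


def legacy_opt_in_valid(today: date) -> bool:
    """The old commercial-terms opt-in is fully deprecated on 7 January 2026."""
    return today < SUBPROCESSOR_LIVE
```

A tenant that opted in under the old terms and checks `legacy_opt_in_valid(date(2026, 1, 7))` gets `False`, signalling that the new toggle must be applied.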
Final Thoughts – What Australian Organisations Should Do Now
For many Australian businesses, government organisations and regulated entities, enabling Anthropic immediately may not align with:
- Data sovereignty policies
- Privacy and security controls
- Essential Eight maturity requirements
- Industry accreditation or compliance frameworks
If in doubt, the safest and most responsible action is to disable Anthropic as a Microsoft subprocessor until a formal risk assessment is completed.
This ensures you maintain compliance while evaluating Anthropic’s fit for your environment.
Need Help Disabling Anthropic or Reviewing Your AI Compliance Posture in Australia?
If your organisation needs guidance on:
- Disabling Anthropic safely
- Reviewing AI subprocessor risk
- Meeting Australian data sovereignty requirements
- Achieving Essential Eight maturity
- Configuring Microsoft 365 Copilot securely
- Developing an AI governance strategy
then feel free to get in touch for expert guidance on this setup.

