At Microsoft Build 2025, Microsoft announced several AI content safety controls for Azure AI Foundry, including: |
|
Spotlighting. Enhances Microsoft Prompt Shields by detecting and blocking potential indirect prompt injections helping prevent unauthorized model actions. |
|
Real-time task adherence for agents. Tool assesses whether agent behavior is aligned with assigned tasks. |
|
Continuous evaluation and monitoring of agentic systems. Unified dashboard for tracking performance, quality, safety, and resource usage in real time. |
|
Evaluation tool integration for compliance management. Between Microsoft Purview, Credo AI and Saidot to help define risk parameters, run compliance evaluations. |
|
Microsoft Entra Agent ID. Gives security admins control, visibility over AI agents built in Azure AI Foundry. |
|
Microsoft Defender integration. AI security posture management recommendations and runtime threat protection alerts, running natively in Azure AI Foundry. |
|
Microsoft Purview extended to AI applications, agents. Data security compliance controls extended to custom AI applications and agents via Purview SDK or native integration with Azure AI Foundry. |
|