Governance and Security |
Threat Protection |
Implement threat protection for all AI models. |
Microsoft Defender for Cloud |
π΄ High |
link |
Governance and Security |
Threat Protection |
Regularly inspect AI model output to detect and mitigate risks associated with malicious or unpredictable user prompts. |
Azure AI Content Safety |
π΄ High |
link |
Governance and Security |
Threat Protection |
Establish company-wide verification mechanisms to ensure all AI models in use are legitimate and secure. |
NA |
π΄ High |
Β |
Governance and Security |
Access Management |
Use distinct workspaces to organize and manage AI artifacts like datasets, models, and experiments. |
Azure AI Foundry |
π΄ High |
link |
Governance and Security |
Risk Mitigation |
Use MITRE ATLAS, OWASP Machine Learning risk, and OWASP Generative AI risk to regularly evaluate risks across all AI workloads. |
NA |
π‘ Medium |
link |
Governance and Security |
Risk Mitigation |
Assess insider risk to sensitive data, across all AI workloads |
Microsoft Purview |
π΅ Low |
link |
Governance and Security |
Risk Mitigation |
Perform AI threat modeling using frameworks like STRIDE to assess potential attack vectors for all AI workloads. |
Microsoft Threat Modeling Tool |
π‘ Medium |
link |
Governance and Security |
Risk Mitigation |
Conduct red-team testing against generative AI models and nongenerative models to assess their vulnerability to attacks. |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Risk Mitigation |
Maintaining a detailed and up-to-date inventory of your AI workload resources |
Microsoft Defender for Cloud |
π΄ High |
link |
Governance and Security |
Risk Mitigation |
Create a data sensitivity change management plan. Track data sensitivity levels as they can change over time. |
NA |
π‘ Medium |
Β |
Governance and Security |
Risk Mitigation |
Safeguard sensitive data when required by using duplicates, local copies, or subsets that contain only the necessary information. |
NA |
π΄ High |
Β |
Governance and Security |
Risk Mitigation |
Conduct rigorous tests to determine if sensitive data can be leaked or coerced through AI systems. |
Azure AI Services |
π΄ High |
link |
Governance and Security |
Risk Mitigation |
Provide AI-focused employee training and awareness emphasizing the importance of data security and AI development best practices and deployment. |
NA |
π‘ Medium |
Β |
Governance and Security |
Risk Mitigation |
Develop and maintain an incident response plan for AI security incidents. |
NA |
π΄ High |
Β |
Governance and Security |
Risk Mitigation |
Regularly evaluate emerging threats and vulnerabilities specific to AI through risk assessments and impact analyses. |
NA |
π΄ High |
Β |
Governance and Security |
Risk Mitigation |
Enforce Customer Managed Keys for data at rest encryption via Azure Policy |
NA |
π‘ Medium |
link |
Governance and Security |
Risk Mitigation |
Disable inferencing via Azure AI Foundry to prevent API Gateway bypass. |
Azure AI Foundry |
π‘ Medium |
Β |
Governance and Security |
Operations |
Use tools like Defender for Cloud to discover Gen AI workloads and explore AI artifacts risks such as vulnerable images & code repositories. |
Microsoft Defender for Cloud |
π΄ High |
link |
Governance and Security |
Operations |
Use Azure AI Content Safety to define a baseline content filter for your approved AI models. |
Azure AI Content Safety |
π΄ High |
link |
Governance and Security |
Operations |
Test the effectiveness of grounding by using tools like prompt flow. |
Azure AI Foundry |
π‘ Medium |
link |
Governance and Security |
Operations |
Enable recommended alert rules to receive notifications of deviations that indicate a decline in workload health. |
Azure AI Search |
π΄ High |
link |
Governance and Security |
Operations |
Use Azure Policy to control which services can be provisioned at the subscription/management group level. |
Microsoft cloud security benchmark |
π‘ Medium |
link |
Governance and Security |
Security |
Limit client access to your AI service by enforcing security protocols like network controls, keys, and role-based access control (RBAC). |
Azure AI Services |
π΄ High |
link |
Governance and Security |
Compliance |
Use Microsoft Purview Compliance Manager to assess and manage compliance across cloud environments. |
Microsoft Purview |
π‘ Medium |
link |
Governance and Security |
Compliance |
Use standards, such as ISO/IEC 23053:2022 to audit policies that are applied to your AI workloads. |
NA |
π΄ High |
Β |
Governance and Security |
Data Classification |
Use a tool like Microsoft Purview to implement a unified data catalog and classification system across your organization. |
Microsoft Purview |
π‘ Medium |
link |
Governance and Security |
Data Classification |
Ensure that any data ingested into AI models is classified and vetted according to centralized standards. |
NA |
π‘ Medium |
Β |
Governance and Security |
Data Classification |
Use a content filtering system like Protected material detection in Azure AI Content Safety to filter out copyrighted material. |
Azure AI Content Safety |
π΄ High |
link |
Governance and Security |
Authentication |
Use Microsoft Entra Authentication with Managed Identity instead of API Key |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Data Sensitivity |
Classify data and sensitivity, labeling with Microsoft Purview before generating the embeddings and make sure to treat the embeddings generated with same sensitivity and classification |
Azure OpenAI |
π΅ Low |
link |
Governance and Security |
Encryption at Rest |
Encrypt data used for RAG with SSE/Disk encryption with optional BYOK |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Transit Encryption |
Ensure TLS is enforced for data in transit across data sources, AI search used for Retrieval-Augmented Generation (RAG) and LLM communication |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Access Control |
Use RBAC to manage access to Azure OpenAI services. Assign appropriate permissions to users and restrict access based on their roles and responsibilities |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Data Masking and Redaction |
Implement data encryption, masking or redaction techniques to hide sensitive data or replace it with obfuscated values in non-production environments or when sharing data for testing or troubleshooting purposes |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Threat Detection and Monitoring |
Utilize Azure Defender to detect and respond to security threats and set up monitoring and alerting mechanisms to identify suspicious activities or breaches. Leverage Azure Sentinel for advanced threat detection and response |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Data Retention and Disposal |
Establish data retention and disposal policies to adhere to compliance regulations. Implement secure deletion methods for data that is no longer required and maintain an audit trail of data retention and disposal activities |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Data Privacy and Compliance |
Ensure compliance with relevant data protection regulations, such as GDPR or HIPAA, by implementing privacy controls and obtaining necessary consents or permissions for data processing activities. |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Employee Awareness and Training |
Educate your employees about data security best practices, the importance of handling data securely, and potential risks associated with data breaches. Encourage them to follow data security protocols diligently. |
Azure OpenAI |
π‘ Medium |
Β |
Governance and Security |
Environment segregation |
Keep production data separate from development and testing data. Only use real sensitive data in production and utilize anonymized or synthetic data in development and test environments. |
Azure OpenAI |
π΄ High |
Β |
Governance and Security |
Index Segregation |
If you have varying levels of data sensitivity, consider creating separate indexes for each level. For instance, you could have one index for general data and another for sensitive data, each governed by different access protocols |
Azure OpenAI |
π‘ Medium |
Β |
Governance and Security |
Sensitive Data in Separate Instances |
Take segregation a step further by placing sensitive datasets in different instances of the service. Each instance can be controlled with its own specific set of RBAC policies |
Azure OpenAI |
π‘ Medium |
Β |
Governance and Security |
Embedding and Vector handling |
Recognize that embeddings and vectors generated from sensitive information are themselves sensitive. This data should be afforded the same protective measures as the source material |
Azure OpenAI |
π΄ High |
Β |
Governance and Security |
Access control |
Apply RBAC to the data stores having embeddings and vectors and scope access based on roleβs access requirements |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Network security |
Configure private endpoint for AI services to restrict service access within your network |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Network security |
Enforce strict inbound and outbound traffic control with Azure Firewall and UDRs and limit the external integration points |
Azure OpenAI |
π΄ High |
Β |
Governance and Security |
Control Network Access |
Implement network segmentation and access controls to restrict access to the LLM application only to authorized users and systems and prevent lateral movement |
Azure OpenAI |
π΄ High |
Β |
Governance and Security |
Secure APIs and Endpoints |
Ensure that APIs and endpoints used by the LLM application are properly secured with authentication and authorization mechanisms, such as Managed identities, API keys or OAuth, to prevent unauthorized access. |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Implement Strong Authentication |
Enforce strong end user authentication mechanisms, such as multi-factor authentication, to prevent unauthorized access to the LLM application and associated network resources |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Use Network Monitoring |
Implement network monitoring tools to detect and analyze network traffic for any suspicious or malicious activities. Enable logging to capture network events and facilitate forensic analysis in case of security incidents |
Azure OpenAI |
π‘ Medium |
Β |
Governance and Security |
Security Audits and Penetration Testing |
Conduct security audits and penetration testing to identify and address any network security weaknesses or vulnerabilities in the LLM applicationβs network infrastructure |
Azure OpenAI |
π‘ Medium |
Β |
Governance and Security |
Infrastructure Deployment |
Azure AI Services are properly tagged for better management |
Azure OpenAI |
π΅ Low |
link |
Governance and Security |
Infrastructure Deployment |
Azure AI Service accounts follows organizational naming conventions |
Azure OpenAI |
π΅ Low |
link |
Governance and Security |
Diagnostics Logging |
Diagnostic logs in Azure AI services resources should be enabled |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Secure Key Management |
Store and manage keys securely using Azure Key Vault. Avoid hard-coding or embedding sensitive keys within your LLM applicationβs code and retrieve them securely from Azure Key Vault using managed identities |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Key Rotation and Expiration |
Regularly rotate and expire keys stored in Azure Key Vault to minimize the risk of unauthorized access. |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Secure coding practice |
Follow secure coding practices to prevent common vulnerabilities such as injection attacks, cross-site scripting (XSS), or security misconfigurations |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Patching and updates |
Setup a process to regularly update and patch the LLM libraries and other system components |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Security Audits and Penetration Testing |
Red team your GenAI applications |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Key Management |
Use customer-managed keys for fine-tuned models and training data thatβs uploaded to Azure OpenAI |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Jailbreak protection |
Implement jailbreak risk detection to safeguard your language model deployments against prompt injection attacks |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Quota exhaustion |
Use security controls like throttling, service isolation and gateway pattern to prevent attacks that might exhaust model usage quotas |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Metaprompting |
Follow Metaprompting guardrails for responsible AI |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Evaluation |
Evaluate the performance/accuracy of the system with a known golden dataset which has the inputs and the correct answers. Leverage capabilities in PromptFlow for Evaluation. |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Content Safety |
Review and implement Azure AI content safety |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
UX best practice |
Review the considerations in HAI toolkit guidance and apply those interaction practices for the solution |
Azure OpenAI |
π‘ Medium |
link |
Governance and Security |
Jail break Safety |
Implement Prompt shields and groundedness detection using Content Safety |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Governance |
Adhere to Azure OpenAI or other LLMs terms of use, policies and guidance and allowed use cases |
Azure OpenAI |
π΄ High |
link |
Governance and Security |
Content Safety |
Tune content filters to minimize false positives from overly aggressive filters. |
Azure OpenAI |
π‘ Medium |
link |