Microsoft SC200 Certification - Microsoft Copilot - Part 8
- brencronin
Microsoft Copilot Overview
Microsoft has named its artificial intelligence (AI) product line Copilot and currently offers the following AI products. Microsoft Security Copilot is the product covered in the SC-200 exam.
Copilot for Microsoft 365: This version is designed for businesses and integrates AI into Microsoft 365 apps like Word, Excel, PowerPoint, Outlook, and OneNote.
Copilot for Sales: This version helps sales teams maximize effectiveness and close more deals.
Copilot for Service: This version improves service experiences and boosts agent productivity.
Copilot Studio: This allows organizations to create custom AI experiences for employees and customers.
Copilot in Azure: This version is designed for developers and cloud infrastructure.
GitHub Copilot: This version is designed for developers to help with code generation and other tasks.
Microsoft 365 Copilot for Finance: This version helps optimize financial processes.
Microsoft Security Copilot: This version is designed to enhance security.
This guide will focus on Microsoft Security Copilot.
Overview of Microsoft Security Copilot
Microsoft Security Copilot is an AI-powered assistant built to enhance and streamline security operations. Using natural language prompts, analysts can request investigations, threat analysis, or recommended security actions, and Copilot responds with AI-generated insights tailored to the Microsoft security stack. The product’s scope is intentionally broad: it sits on top of multiple Microsoft security solutions, and its impact depends heavily on how effectively it can translate those capabilities into actionable outcomes during real-world operations.
How Security Copilot User Interaction Works
At its core, Microsoft Security Copilot is driven by free-text, natural language interaction—similar to other AI-powered systems. Users engage with Copilot by submitting prompts, which are simply questions or tasks expressed in natural language. Effective use of Copilot depends heavily on prompt engineering: how clearly and precisely prompts are written.
Prompts can be organized into promptbooks, which are curated collections of related prompts for specific use cases (e.g., phishing investigation, malware analysis, or alert triage). Security teams can build their own promptbooks or leverage out-of-the-box (OOTB) promptbooks developed by Microsoft. Copilot also provides a Promptbook Library, where users can browse, duplicate, and customize existing promptbooks.
Another key concept is sessions: interactive dialogues in which a sequence of prompts and responses is linked together, preserving context across the investigation.
Security Copilot itself functions primarily as an orchestrator that bridges the Microsoft environment with the OpenAI LLM operating within Microsoft’s secure ecosystem. Rather than a traditional AI model relying on embeddings or vector databases, the Copilot orchestrator leverages plugins to call the APIs of the respective Microsoft or third-party services instead of performing semantic searches or querying internal data stores.
Unlike traditional Retrieval-Augmented Generation (RAG) architectures that rely on vector database embeddings, Security Copilot accesses environmental data through native security tool APIs exposed via the Microsoft Graph API. Data retrieval, enrichment, and contextual grounding are performed through plugins, which serve as controlled connectors between the LLM and the environment. The orchestrator uses these plugins to process the LLM’s response and align it with real-time environmental data, ensuring outputs are relevant and actionable. This design eliminates the need for Copilots to handle direct database provisioning or data security controls, as those responsibilities remain within the connected service. Supporting them are agents, which act as specialized workers that execute specific operational or analytical tasks as directed by the Copilot.
Within this architecture, agents represent AI-assisted workflows: each agent defines a specific task, identifies the required tools, and executes the workflow accordingly. In essence, an agent is a plugin paired with an execution definition.
However, it’s important to note that complex agents often exhibit inconsistent behavior, as LLM outputs are inherently non-deterministic. This lack of repeatability can introduce instability in automated workflows, making reliability a key design consideration for production use.

The Flow of Interaction
1. User inputs a prompt
Prompts can originate directly from Microsoft security products like Defender or Sentinel, or be typed into Copilot’s interface.
Example: “Analyze this Defender alert and summarize potential lateral movement.”
2. Copilot accesses plugins for pre-processing (grounding)
Copilot uses grounding to refine the prompt, align it with organizational context, and enhance accuracy. It retrieves data via plugins, APIs, and connected Microsoft security products before sending the enriched request to the AI model.
3. Copilot sends the modified prompt to the LLM.
4. Copilot receives the LLM response.
5. Post-processing and refinement
Copilot processes the LLM’s response and aligns it with real-time environmental data so the output is relevant and actionable.
6. Copilot sends the response, along with any app commands, back to the security product for user review and assessment.
The final response is delivered to the analyst. Users can review, refine, or chain additional prompts, enabling iterative analysis in the same session.
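The flow above can be sketched in miniature. The Python below is a hypothetical stand-in (the plugin and LLM calls are stubbed), meant only to show how the orchestrator threads environmental context between the user prompt and the model response; it is not the real Security Copilot API.

```python
# Hypothetical sketch of the six-step flow; the plugin and LLM calls are
# stubbed stand-ins, not the real Security Copilot API.

def defender_plugin(prompt: str) -> dict:
    """Step 2: pre-processing. A plugin grounds the prompt with live data."""
    return {"alert_id": "A-1001", "device": "WS-042", "verdict": "suspicious"}

def call_llm(prompt: str, context: dict) -> str:
    """Steps 3-4: the enriched prompt goes to the LLM inside Microsoft's ecosystem."""
    return (f"Summary for alert {context['alert_id']}: possible lateral movement "
            f"from device {context['device']}.")

def orchestrate(user_prompt: str) -> str:
    context = defender_plugin(user_prompt)      # grounding via plugin APIs
    response = call_llm(user_prompt, context)   # model call
    # Steps 5-6: post-process and hand the grounded answer back to the analyst,
    # who can review it or chain further prompts in the same session.
    return response

print(orchestrate("Analyze this Defender alert and summarize potential lateral movement."))
```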
Security Copilot vs. Security Copilot “Agents”
A critical aspect of understanding Copilot is recognizing that it is not a standalone product, but an orchestration layer across Microsoft’s broader security ecosystem, particularly Defender XDR. Defender XDR itself is a fusion of multiple Microsoft solutions that span endpoints, identity, cloud, and email. During incident response, it’s common for several of these products to be in play at once; Copilot’s value lies in its ability to unify those signals, contextualize them, and assist with decision-making or automated response.
The interaction model raises key architectural questions:
Does Security Copilot integrate natively behind the scenes with each product’s APIs?
Does it rely on product-specific agents or connectors to gather and act on data?
Or does it use a hybrid approach, leveraging both centralized Copilot intelligence and product-specific copilots embedded in individual services?
In practice, Microsoft has been pursuing both paths: Security Copilot provides a central analyst-facing experience, while also embedding Copilot capabilities into tools like Microsoft Sentinel, Defender for Endpoint, or Intune to provide contextual, product-specific guidance.
Key Microsoft Security Products Copilot Interacts With
Below is a high-level summary of major Microsoft security products that Copilot draws from and acts upon during analysis:
Microsoft Defender for Endpoint (MDE): Endpoint detection and response (EDR) platform for Windows, Linux, macOS, Android, and iOS.
Microsoft Defender for Cloud Apps (formerly MCAS): Cloud Access Security Broker (CASB) solution for SaaS discovery, governance, and inline protections.
Microsoft Defender for Office 365 (MDO): Email and collaboration security for phishing, spam, and business email compromise detection.
Microsoft Defender for Cloud: Posture management and threat protection for Azure, hybrid, and multicloud workloads.
Microsoft Purview: Governance, data loss prevention (DLP), insider risk management (IRM), eDiscovery, and communications compliance.
Microsoft Defender for Identity (MDI): Detection of Active Directory misconfigurations and AD-based attack techniques (e.g., Kerberoasting, Pass-the-Hash).
Microsoft Intune: Endpoint management (MDM/MAM) for configuration compliance and conditional access enforcement.
Microsoft Entra: Microsoft’s identity platform, including Azure AD, risk-based conditional access, and identity protection.
Microsoft Defender Vulnerability Management (MDVM): Vulnerability assessment and prioritization.
Microsoft Defender Threat Intelligence (MDTI): Microsoft’s threat intel platform integrated with Recorded Future and MSTIC insights.
Microsoft Sentinel: SIEM + SOAR platform for centralized log ingestion, detection, and automation (via Logic Apps).
Microsoft Defender XDR: The unifying solution that stitches together telemetry and response actions across endpoints, identity, email, apps, and cloud.
Microsoft Security Copilot must interface with these tools through APIs and connectors to ingest telemetry, perform AI-driven analysis, enrich and correlate data, and trigger response actions across the Microsoft security stack.
Microsoft Security Copilot Agents
Microsoft recently introduced support for dedicated Security Copilot agents, extending the platform’s capabilities with specialized, task-focused AI assistants. To frame this properly, it helps to understand the concept of agentic AI. Agentic AI refers to systems that can autonomously pursue complex goals, plan and adapt in real time, make decisions, and execute multi-step workflows, all with minimal human supervision. In practice, these agents function almost like digital team members embedded within the security stack.
In the context of Microsoft Security Copilot, agents are tightly coupled to specific Microsoft security products, enabling automation and intelligence across defined domains. Each agent not only assists with analysis and decision-making but also directly interfaces with product telemetry and controls to execute outcomes. Current examples include:
Conditional Access Optimization Agent in Microsoft Entra – analyzes identity risk signals to recommend or enforce conditional access policies.
Phishing Triage Agent in Microsoft Defender for Office 365 – streamlines detection and investigation of phishing attempts.
Alert Triage Agent in Microsoft Purview – automates prioritization of data security alerts.
Vulnerability Remediation Agent in Microsoft Intune – automates remediation workflows for endpoint vulnerabilities.
Threat Intelligence Briefing Agent in Microsoft Defender Threat Intelligence (MDTI) – delivers contextualized threat briefings and intelligence-driven recommendations.
Partner-Developed Agents – extend Copilot with custom capabilities from the Microsoft partner ecosystem.

Microsoft Security Copilot Agents – Current Availability
According to recent Microsoft documentation (Microsoft Learn – Security Copilot Agents), Security Copilot now includes several dedicated agents, each aligned to specific Microsoft security products. Notably, two additional agents have been introduced: the Access Review Agent and the Threat Intelligence Briefing Agent (Standalone Experience).
The currently available agents are:
Conditional Access Optimization Agent – Microsoft Entra (Embedded Experience): Monitors new users and applications not yet covered by existing conditional access policies, identifies potential gaps, and recommends corrective actions. Identity teams can apply these fixes directly with a single click, streamlining policy governance.
Phishing Triage Agent – Microsoft Defender for Office 365: Automates the triage and classification of user-submitted phishing incidents. By reducing manual review effort, this agent accelerates response times and improves phishing defense efficiency.
Security Copilot Agents in Microsoft Purview (Preview): Supports Data Loss Prevention (DLP) and Insider Risk Management (IRM) by managing an agent-driven alert queue. Alerts for the highest-risk activities are prioritized, analyzed for content and intent, and accompanied by detailed logic explaining categorization decisions.
Vulnerability Remediation Agent – Microsoft Intune: Identifies and ranks critical vulnerabilities on managed devices, assesses potential impact, and provides step-by-step remediation guidance leveraging Intune’s patching and compliance capabilities.
Access Review Agent: Assists reviewers during access governance campaigns by delivering contextual insights and recommendations. Integrated into Microsoft Teams, it simplifies and accelerates access review workflows while improving decision accuracy.
Threat Intelligence Briefing Agent – Security Copilot (Standalone Experience): Curates relevant, contextualized, and timely threat intelligence tailored to an organization’s specific attributes and threat exposure, enabling security leaders to stay ahead of emerging risks.
Copilot for Security partner agents
For Microsoft partner agents see: https://learn.microsoft.com/en-us/copilot/security/agents-other
Copilot for Security - Building custom agents
For developing your own custom agents, see Microsoft’s documentation.
Custom agents leverage AI-driven automation to streamline and enhance operational workflows by integrating several core components:
Tools (Skills): Define the specific functions and actions the agent can perform.
Triggers: Identify the conditions or events that initiate agent activity.
Orchestrators: Manage the decision logic and execution flow of tasks.
Instructions: Provide system-level directives that guide agent behavior.
Knowledge: Embed domain-specific intelligence and operational context.
Feedback: Capture and store results to refine future responses.
By combining these components, custom agents integrate seamlessly into security operations—enabling proactive, adaptive, and workflow-driven automation. Agents can respond dynamically to events or scheduled tasks, executing context-aware actions under the guidance of Large Language Models (LLMs) to improve precision and operational efficiency.
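As a rough sketch of how the components listed above fit together, the hypothetical dataclass below wires a trigger, a tool (skill), instructions, knowledge, and a feedback store into one agent. It is illustrative only and does not reflect the actual Security Copilot agent-building interface.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# Hypothetical sketch only: these class and field names mirror the component
# list above, not the actual Security Copilot agent-building interface.

@dataclass
class CustomAgent:
    instructions: str                            # system-level directives
    tools: Dict[str, Callable[[str], str]]       # skills the agent can invoke
    trigger: Callable[[dict], bool]              # condition that initiates activity
    knowledge: dict = field(default_factory=dict)    # domain-specific context
    feedback: List[str] = field(default_factory=list)  # captured results

    def run(self, event: dict) -> Optional[str]:
        """Orchestrator: decide whether to act, pick a tool, record the result."""
        if not self.trigger(event):
            return None
        tool = self.tools.get(event.get("type", ""), lambda _: "no matching skill")
        result = tool(event.get("payload", ""))
        self.feedback.append(result)  # stored to refine future responses
        return result

agent = CustomAgent(
    instructions="Triage user-submitted phishing reports.",
    tools={"phish": lambda payload: f"classified '{payload}' as phishing"},
    trigger=lambda event: event.get("type") == "phish",
)
print(agent.run({"type": "phish", "payload": "invoice.eml"}))
```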
Security Copilots
Distinct from agents are product-specific Security Copilots built for individual Microsoft security products. A key distinction to understand is the difference between embedded and standalone experiences.
Embedded Copilots are integrated directly into the security product, typically presenting insights through a familiar LLM-driven Q&A interface within that product’s console.
Standalone Copilots operate through a separate Copilot dashboard or endpoint, where users interact with the AI outside of the individual Microsoft security tool it supports. The standalone experience is accessed at: https://securitycopilot.microsoft.com/
Below is an overview of product-specific Microsoft Security Copilots that fall under the broader Microsoft Copilot for Security umbrella.
Defender XDR Security Copilot
Embedded in the Defender XDR portal, Copilot helps analysts accelerate incident response by summarizing incidents, guiding investigations, analyzing scripts/files/registry keys, generating KQL queries, contextualizing device and identity activity, and producing incident reports.
Microsoft Entra Security Copilot
Acts as an intelligent assistant for identity security. Copilot enhances access control by providing contextual insights into user identity, risk posture, and policy decisions, helping organizations ensure only the right people access sensitive resources.
Microsoft Purview Security Copilot
Copilot enriches data security operations by summarizing DLP alerts, highlighting policy violations, and correlating with Insider Risk Management signals. This gives analysts a streamlined entry point for deeper investigations into data misuse or exfiltration.
Defender for Cloud Security Copilot
Within Defender for Cloud, Copilot supports cloud workload protection by helping teams prioritize security recommendations. Analysts can prompt Copilot to surface risks tied to publicly exposed or business-critical resources, focusing attention on the most impactful issues.
Microsoft Intune Security Copilot
Copilot augments endpoint and mobile device management by summarizing Intune policies and surfacing their effects on users and devices. It also assists with device troubleshooting through features like “Explore device,” “Compare devices,” and “Analyze error codes.”
Microsoft Copilot Plugins
Plugins are extensions that enhance Security Copilot’s ability to connect with and leverage both Microsoft and third-party security services. By integrating plugins, Copilot can analyze data from additional sources and deliver a more complete view of an organization’s security posture. A plugin provides specialized AI-driven functionality that extends the core capabilities of Security Copilot. This design makes Security Copilot not just a product, but also a platform. Unlike static solutions that are limited to predefined features, Copilot can be expanded and customized through plugins to meet diverse organizational needs, including custom-built integrations.
Many plugins are tied to specific Microsoft security products. For example, there are plugins for Microsoft Defender XDR, Intune, Entra, and Purview, as well as integrations with third-party solutions like ServiceNow. These allow Copilot to interact seamlessly with different parts of the security ecosystem. Depending on the solutions licensed and assigned within your role or organization, you may have access to some or all of the following plugins:
Microsoft Defender XDR Plugin
Natural Language to KQL Plugin
Microsoft Defender External Attack Surface Management Plugin
Microsoft Defender Threat Intelligence (MDTI) Plugin
Microsoft Purview Plugin
Microsoft Entra Plugin
Microsoft Intune Plugin
Azure AI Search (Preview) Plugin
Azure Firewall Plugin
Azure Web Application Firewall (Preview) Plugin
Microsoft Sentinel Plugin
Security Copilot Authentication and Access Control
Security Copilot interacts with these services using on-behalf-of (OBO) authentication, a method that allows Copilot to act with the same permissions you already have in Microsoft Entra ID (formerly Azure AD). This means Copilot can access and analyze security-related data through active plugins without requiring you to log into each service separately. Security Copilot roles determine who can access the platform and what administrative tasks they can perform (e.g., configuring settings, assigning permissions, enabling plugins).
Microsoft Entra and Azure RBAC roles govern which plugins you can actually use and what data Copilot can access through them.
Least privilege is critical: Copilot inherits your access rights. If you lack the necessary permissions, a plugin will not function; if you have broad permissions, Copilot will have the same scope of access.
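The least-privilege point can be illustrated with a toy check: because Copilot acts on behalf of the signed-in user, a plugin functions only if that user’s own roles satisfy its requirements. The role and plugin names below are illustrative stand-ins, not actual Entra role definitions.

```python
# Toy illustration of least privilege under on-behalf-of (OBO) access:
# Copilot can never reach further than the signed-in user's own roles.
# Role and plugin names here are illustrative, not actual Entra role names.

USER_ROLES = {
    "alice": {"SecurityReader"},
    "bob": {"SecurityReader", "IntuneAdmin"},
}

PLUGIN_REQUIRED_ROLES = {
    "DefenderXDR": {"SecurityReader"},
    "Intune": {"IntuneAdmin"},
}

def copilot_can_use(user: str, plugin: str) -> bool:
    """A plugin functions only if the user's roles cover its requirements."""
    return PLUGIN_REQUIRED_ROLES[plugin] <= USER_ROLES.get(user, set())

print(copilot_can_use("alice", "DefenderXDR"))  # alice can query Defender XDR
print(copilot_can_use("alice", "Intune"))       # the Intune plugin will not function for her
```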
Microsoft Defender XDR Plugins
Microsoft Documentation – Security Copilot in Microsoft Defender XDR: https://learn.microsoft.com/en-us/defender-xdr/security-copilot-in-microsoft-365-defender?bc=%2Fsecurity-copilot%2Fbreadcrumb%2Ftoc.json&toc=%2Fsecurity-copilot%2Ftoc.json
Microsoft Defender XDR integrates directly with Security Copilot through two distinct plugins:
Microsoft Defender XDR Plugin – Focused on incident analysis and investigation.
Natural Language to KQL for Microsoft Defender XDR Plugin – Converts natural language prompts into advanced KQL queries for hunting across Microsoft Defender XDR data.
Capabilities of the Microsoft Defender XDR Plugin
Security Copilot extends incident response workflows in Defender XDR by automating and simplifying common analyst tasks, including:
Summarize incidents – Aggregates related alerts, notes, and telemetry into a concise, contextual summary.
Guided responses – Provides step-by-step investigation and remediation guidance based on incident details.
Analyze scripts, code, and registry keys – Breaks down complex artifacts into human-readable explanations.
Analyze files – Inspects files for malicious behaviors, leveraging metadata such as API calls, certificates, and embedded strings.
Summarize device information – Provides a high-level overview of device posture, anomalies, and suspicious activity.
Summarize user/identity context – Surfaces security concerns and anomalies related to a specific identity.
Generate incident reports – Documents findings, response actions, and attribution in a structured report.
Capabilities of the Natural Language to KQL Plugin
This plugin translates analyst intent into optimized KQL queries, enabling proactive threat hunting within Defender XDR. Analysts can:
Generate advanced queries – Automatically build complex KQL queries from plain-language prompts.
List incidents and alerts – Pull lists of current or historical incidents, filterable by entity, time, or severity.
Query device states – Retrieve device insights, vulnerabilities, and indicators of compromise.
Correlate signals across entities – Hunt across endpoints, identities, and cloud workloads using KQL-driven correlation.
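As a rough illustration of what this plugin automates, the toy translator below maps a plain-language intent to a KQL hunting query. DeviceProcessEvents is a real Defender XDR advanced-hunting table, but the keyword-to-template lookup is a deliberately simple stand-in for the plugin’s LLM-driven translation.

```python
# Toy sketch of natural-language-to-KQL translation. The template lookup is
# a simplification; the actual plugin uses an LLM to generate queries.

KQL_TEMPLATES = {
    "powershell activity on device": (
        "DeviceProcessEvents\n"
        "| where DeviceName == '{device}'\n"
        "| where FileName =~ 'powershell.exe'\n"
        "| project Timestamp, DeviceName, AccountName, ProcessCommandLine\n"
        "| take 100"
    ),
}

def to_kql(intent: str, **params: str) -> str:
    """Look up a hunting template for the stated intent and fill in parameters."""
    return KQL_TEMPLATES[intent].format(**params)

print(to_kql("powershell activity on device", device="WS-042"))
```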
Microsoft Defender External Attack Surface Plugin
The Defender External Attack Surface Management (EASM) plugin for Security Copilot helps organizations discover, monitor, and analyze their internet-facing assets. It enables analysts to quickly assess risks, identify exposures, and prioritize remediation by surfacing actionable insights through natural language queries.
Key Capabilities
Vulnerability Impact Assessment
Get assets impacted by CVE: Identify affected assets, their details, and when exposure was first observed.
Get assets impacted by CVSS: Correlate CVSS scores with impacted assets to prioritize remediation efforts.
Attack Surface Insights
Get attack surface insights: View high-, medium-, or low-priority insights related to exposed assets.
Get attack surface summary: Generate an overview of assets within a given attack surface.
Asset Discovery & Validation
Get EASM assets: Retrieve assets through natural language queries.
Get expired domains: Identify expired domains and their associated details.
Get expired SSL certificates: Detect expired SSL certificates, their metadata, and linked assets.
Get SSL certificates by SHA-1: Query SSL certificates directly using SHA-1 identifiers.
Platform Knowledge
Get Microsoft Defender EASM general information: Access reference data, features, and configuration details for Microsoft Defender EASM.
Microsoft Defender Threat Intelligence Plugin
Learn more from Microsoft documentation: https://learn.microsoft.com/en-us/defender/threat-intelligence/security-copilot-and-defender-threat-intelligence?bc=%2Fsecurity-copilot%2Fbreadcrumb%2Ftoc.json&toc=%2Fsecurity-copilot%2Ftoc.json
The Microsoft Defender Threat Intelligence (MDTI) plugin integrates rich, continuously updated threat intelligence into Security Copilot. By combining Copilot’s reasoning capabilities with MDTI’s data, analysts can accelerate incident investigations, strengthen threat detection, and improve hunting workflows.
For example, an analyst can simply ask Copilot for an overview of the latest threats relevant to their organization, and the plugin will pull together adversary tactics, techniques, and intelligence profiles into a clear, actionable summary.
Vulnerability Intelligence
Get CVE details by IDs: Retrieve vulnerability details and remediation guidance for specific CVEs.
Get CVE details by keywords: Generate a list of recent or keyword-matched CVEs with concise summaries.
Get CVE mitigation: Access recommended mitigation or remediation steps for a given CVE.
DNS & Infrastructure Lookups
Get DNS resolutions by hostname: Return DNS resolution history for a specified hostname.
Get DNS resolutions by IP address: Map an IP address to its DNS resolution records.
Threat Intelligence & IOCs
Get intelligence profile IOCs: Retrieve indicators of compromise associated with specific intelligence profiles.
Get IOC reputation: Assess the reputation and risk level of one or more IOCs.
Look up threat intelligence: Search across intelligence profiles, reports, and articles.
Incident Contextualization
GetRelatedIncidentsByQuery: Identify incidents and alerts related to specific threat reports within a chosen timeframe.
Microsoft Purview Plugin
Learn more from Microsoft documentation: https://learn.microsoft.com/en-us/purview/copilot-in-purview-overview?bc=%2Fsecurity-copilot%2Fbreadcrumb%2Ftoc.json&toc=%2Fsecurity-copilot%2Ftoc.json
The Microsoft Purview plugin integrates data security, compliance, and insider risk intelligence into Security Copilot, giving analysts the ability to quickly surface sensitive data risks and user behaviors in the context of security investigations. By leveraging Purview’s governance and compliance insights, Security Copilot extends beyond traditional threat detection to provide visibility into data exfiltration risks, insider threats, and policy violations.
Knowledge & Documentation
Ask Microsoft Purview documentation: Search Purview solution documentation directly through Copilot for quick answers.
Data Risk Insights
Get Microsoft Purview data risk summary: Summarize risks tied to sensitive data linked to incidents or DLP alerts.
Zoom into Microsoft Purview data risk: Drill into attributes such as sensitivity labels or risky activities associated with specific data.
User Risk Insights
Get Microsoft Purview user risk summary: Provide a consolidated overview of user risk based on Purview Insider Risk Management signals.
Zoom into Microsoft Purview user risk: Analyze user activities in detail, including behaviors that may indicate insider threats.
Alert Management
Summarize Microsoft Purview alerts: Aggregate alerts from Purview Data Loss Prevention (DLP) and Insider Risk Management into concise summaries.
Triage Microsoft Purview alerts: Retrieve and filter recent alerts by severity or status to accelerate investigation workflows.
Microsoft Entra plugin
Explore a summary of a user’s active risk with Entra ID Protection: View a detailed summary of a Microsoft Entra ID user’s risk.
Explore diagnostic log collection in Microsoft Entra: View settings for diagnostic log collection and streaming of activity logs in Microsoft Entra ID.
Explore Microsoft Entra audit log details: View changes to applications, groups, users, and licenses in Microsoft Entra ID.
Find group details in Microsoft Entra: View Microsoft Entra ID group ownership and membership details.
Find sign-in logs in Microsoft Entra: View Microsoft Entra ID sign-in log details, including policy evaluation results.
Find user details in Microsoft Entra: View contact information, authentication method registration, and account details for users.
Investigate identity risks with Entra ID Protection: View details of Microsoft Entra ID users with high, medium, or low risk of compromise.
Microsoft Intune Plugin
Learn more from Microsoft documentation: https://learn.microsoft.com/en-us/intune/intune-service/copilot/security-copilot?bc=%2Fsecurity-copilot%2Fbreadcrumb%2Ftoc.json&toc=%2Fsecurity-copilot%2Ftoc.json
The Microsoft Intune plugin integrates device and application management insights into Security Copilot, enabling analysts and IT administrators to quickly assess endpoint configurations, policy enforcement, and compliance risks during investigations. With this plugin, Security Copilot can surface detailed configuration data, explain policy settings, and provide context on user or device assignments, all through natural language queries.
Device Configuration & Policies
Analyze device configuration error code: Retrieve detailed information about device configuration errors for troubleshooting.
Describe device configuration policy: Summarize and assess the impact of Intune policies on managed devices.
Learn about policy setting: Get best practices and detailed explanations for specific configuration policy settings.
Learn where policies use a specified setting: Identify existing device policies that contain a given configuration setting.
Assignments & Scope
Get assignment scope for app or policy: Summarize the number of users and devices assigned to an app or policy.
Understand why a device has an app or policy: Explain group memberships and policy assignments that affect a specific device.
Device Insights
Get Microsoft Intune devices: Summarize managed devices, including key details for a given user.
Get device configuration differences: Compare the configurations of two devices to highlight similarities and differences.
Get device group memberships: Summarize group memberships of a managed device.
Get discovered or managed apps: List Intune-managed apps or installed software on a device.
Get policies assigned to a device: Provide details about policies applied to a specific device, including both device and app configurations.
Azure AI Search Plugin (Preview)
Microsoft Documentation: https://learn.microsoft.com/en-us/copilot/security/plugin-azure-ai-search
The Azure AI Search plugin integrates scalable, secure search and knowledge retrieval into Security Copilot. This allows analysts to surface insights from proprietary organizational content directly within Copilot conversations. By connecting to an Azure AI Search index, Copilot can generate contextualized, relevant, and specific responses based on your own data.
Indexing – Load and structure organizational content into the search service for fast, secure retrieval.
Querying – Search indexed content at scale and extract insights with natural language prompts inside Security Copilot.
This plugin is especially valuable for document search, data exploration, and chat-style Copilot apps where context from internal content is critical to incident investigation or decision-making.
Azure Firewall Plugin
Microsoft Documentation: https://learn.microsoft.com/en-us/azure/firewall/firewall-copilot?bc=%2Fsecurity-copilot%2Fbreadcrumb%2Ftoc.json&toc=%2Fsecurity-copilot%2Ftoc.json
The Azure Firewall plugin enhances Security Copilot by enabling natural language investigation of firewall telemetry. Analysts can query malicious traffic detected by the Intrusion Detection and Prevention System (IDPS) across their entire firewall fleet and receive clear, contextual answers in real time.
Investigate malicious traffic patterns quickly.
Correlate IDPS alerts across multiple firewalls.
Support rapid incident response with natural language queries.
Azure Web Application Firewall Plugin (Preview)
Microsoft Documentation: https://learn.microsoft.com/en-us/azure/web-application-firewall/waf-copilot?bc=%2Fsecurity-copilot%2Fbreadcrumb%2Ftoc.json&toc=%2Fsecurity-copilot%2Ftoc.json
The Azure Web Application Firewall (WAF) plugin allows Security Copilot to perform deep investigations of WAF events and surface attack insights in minutes. By leveraging natural language queries, analysts can quickly analyze WAF logs, identify trends, and uncover potential attack vectors at machine speed.
Summarize Azure WAF events and related attack vectors.
Retrieve frequently triggered WAF rules.
Identify top offending IP addresses in your environment.
Gain real-time visibility into the application-layer threat landscape.
Industry Custom Plugins
ServiceNow
Plugin to access ServiceNow knowledge articles.
Plugin to integrate with the ServiceNow Security agent.
Creating Custom Plugins
Building a custom plugin for Microsoft Security Copilot is a straightforward process, particularly when leveraging KQL-based plugins. A plugin begins with a simple YAML file that defines the KQL query or skill. YAML is a human-readable configuration format widely used in cloud and security automation.
Examples of Custom Plugins
CfSAllinOne – Combines multiple plugins into a single all-in-one workflow to capture and respond with Security Copilot activity. Updated as new monitoring capabilities become available.
IP-API Integration – Uses the free IP-API service to enrich IP addresses with attributes such as geolocation, ISP, ASN, proxy status, and more.
WHOIS Services – Leverages WHOIS data for domain attribution and investigation, a critical capability for threat intelligence and incident response.
Types of Custom Plugins
Security Copilot supports several categories of plugins:
API-based plugins – Extend Copilot with external data sources or enrichment services.
GPT-based plugins – Add natural language or specialized AI-driven functionality.
KQL-based plugins – Define reusable KQL queries as Copilot skills.
Process to Create a KQL-Based Plugin
Creating a KQL-based plugin can be distilled into three simple steps:
Identify the query – Choose the KQL query you want to operationalize as a plugin skill.
Build the YAML file – Add the query into a YAML template that defines the plugin.
Upload to Copilot – Deploy the YAML file to Security Copilot, making the query available as a callable skill.
This modular approach allows security teams to quickly extend Security Copilot with custom queries, data sources, and integrations tailored to their organization’s needs.
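To make the three steps concrete, here is a minimal sketch of what such a YAML file can look like. The plugin name, skill name, and query below are hypothetical examples; the field names follow the general shape of Microsoft's published KQL plugin schema (Descriptor, SkillGroups, Settings), so check the current documentation before deploying:

```yaml
# Illustrative KQL-based Security Copilot plugin definition.
# Descriptor identifies the plugin; SkillGroups holds the KQL skills.
Descriptor:
  Name: FailedSignInsPlugin
  DisplayName: Failed Sign-Ins Plugin
  Description: Surfaces recent failed sign-in activity as a Copilot skill.

SkillGroups:
  - Format: KQL
    Skills:
      - Name: GetFailedSignIns
        DisplayName: Get failed sign-ins
        Description: Lists the accounts with the most failed sign-ins in the last 24 hours.
        Settings:
          # Target can be Defender or Sentinel (a Sentinel target also
          # requires workspace details such as TenantId and WorkspaceName).
          Target: Defender
          Template: |-
            AADSignInEventsBeta
            | where Timestamp > ago(24h)
            | where ErrorCode != 0
            | summarize Attempts = count() by AccountUpn
            | top 10 by Attempts
```

Once uploaded, the skill becomes callable from a prompt (e.g., "Get failed sign-ins"), and Copilot runs the embedded KQL on your behalf.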
Investigations, Automation, and AI
A useful way to think about automation and AI in investigations is to break down the investigative lifecycle. Each stage may be manual, automated, or a hybrid, and mature organizations typically have stronger capabilities at every step. As Brian Carrier (Cyber Triage) outlined, investigations generally follow these key phases. The challenge, and opportunity, lies in determining how to best automate and integrate AI within and across each stage.
1. Planning
Investigations begin with defining key questions and mapping out what data is needed, where it resides, and how it should be collected. Traditionally this has been a manual process, but AI can accelerate hypothesis generation, recommend investigative paths, and dynamically adjust the plan as new evidence or limitations surface.
2. Data Collection
Data is gathered from relevant sources, whether by extracting artifacts from a system or querying data silos such as EDR, SIEM, or cloud logs. Automation ensures consistency and scale, while AI can prioritize which data sources are most likely to yield answers.
3. Parsing
Collected data must be converted into a usable format (artifacts). Historically, this has relied on parsers built with static rules. AI-driven parsing could adapt dynamically to new formats, evolving log structures, or obfuscated data.
4. Normalization
Normalization maps raw data to its semantic meaning (e.g., mapping a binary execution event to a “process”) and removes duplicates. Few tools handle this well today, but AI has strong potential for contextual mapping and disambiguation across diverse datasets.
5. Enrichment
Artifacts are enriched with external intelligence or contextual data such as hash lookups, object recognition, vulnerability references, or language translation. AI can streamline enrichment at scale by integrating multiple intelligence feeds and adding contextual reasoning.
6. Scoring
Artifacts are assessed for their relevance to the case, often based on investigative goals. AI models can score artifacts dynamically, factoring in anomaly detection, historical baselines, and adversary TTP patterns to reduce noise and prioritize high-value leads.
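The enrichment and scoring stages described above can be sketched in a few lines of code. Everything here is a hypothetical illustration (the intel feed, field names, and weights are invented for the example, not drawn from any specific product):

```python
# Sketch: enrich artifacts with a threat-intel feed, then score them.
# All data, field names, and weights below are hypothetical.

INTEL_FEED = {"e3b0c442": {"malicious": True, "family": "ExampleRAT"}}

def enrich(artifact):
    """Attach intel context to an artifact based on its file hash."""
    intel = INTEL_FEED.get(artifact.get("hash", ""), {})
    return {**artifact, "intel": intel}

def score(artifact):
    """Combine simple signals into a 0-100 relevance score."""
    s = 0
    if artifact["intel"].get("malicious"):
        s += 60                       # known-bad indicator
    if artifact.get("anomaly", 0) > 0.8:
        s += 30                       # behavioral anomaly
    if artifact.get("ttp_match"):
        s += 10                       # matches a tracked adversary TTP
    return min(s, 100)

artifacts = [
    {"hash": "e3b0c442", "anomaly": 0.9, "ttp_match": True},
    {"hash": "ffffffff", "anomaly": 0.1},
]
# Rank enriched artifacts so the highest-value leads surface first.
ranked = sorted((enrich(a) for a in artifacts), key=score, reverse=True)
```

A real pipeline would replace the static dictionary with live feeds and a learned scoring model, but the shape is the same: enrich first, then score and rank so analysts see high-value leads first.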
7. Display
Findings must be presented in a way that enables human reasoning, beyond unsorted tables. Clustering, graph visualizations, and AI-assisted grouping can surface relationships and patterns that may otherwise remain hidden.
8. Search
Investigators need to search across collected evidence. While many tools provide basic keyword or filter searches, AI can enable semantic search, natural language queries, and context-aware filtering, bridging the gap between human intent and machine-readable data.
9. Reporting
The investigation concludes with documentation of key findings. Generative AI can automate first-draft report creation, summarizing data, producing timelines, and tailoring content for both technical and executive audiences.
Microsoft Security Copilot Installation
Security Compute Unit (SCU) & Billing
Provisioned SCUs
The Security Compute Unit (SCU) represents the computing capacity required to run Security Copilot workloads. Billing is based on hourly SCU blocks, not per-minute increments, with a minimum charge of one SCU per activation.
For example, if you activate an SCU, run a Copilot task for 30 minutes, turn it off, and then start another task 20 minutes later, you are billed for two SCU-hours, because each activation starts a new hourly block. Efficient job management directly impacts cost, as SCUs are consumed only by Copilot-related tasks.
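The hourly-block model above can be reasoned about with simple arithmetic. This is a back-of-the-envelope sketch, assuming each separate activation is rounded up to whole hourly blocks with a minimum of one; consult Microsoft's SCU pricing documentation for actual billing rules:

```python
import math

def billed_scu_hours(activation_minutes, scus_per_hour=1):
    """Estimate billed SCU-hours: each activation is rounded up
    to whole hourly blocks, with a minimum of one block."""
    return sum(
        max(1, math.ceil(minutes / 60)) * scus_per_hour
        for minutes in activation_minutes
    )

# Two separate activations (30 min, then 20 min) bill two SCU-hours,
# even though total runtime is under one hour.
print(billed_scu_hours([30, 20]))  # two blocks
# One continuous 50-minute session bills a single SCU-hour.
print(billed_scu_hours([50]))      # one block
```

The takeaway for cost management: batching short tasks into one continuous session can halve the billed capacity compared to repeatedly activating and deactivating.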
Overage SCUs
Overage SCUs occur when your provisioned capacity is exceeded. You can configure overage as either unlimited or capped at a defined limit. Usage is tracked per unit consumed, with calculations precise to one decimal place.
Provisioned Capacity Model
Security Copilot operates on a provisioned capacity model, where costs are determined by the number of SCUs allocated and any additional overage. Each Copilot task consumes a specific amount of SCUs, and more complex or frequent tasks will increase consumption.
To manage costs effectively:
Monitor SCU usage via the Security Copilot portal dashboard.
SCUs can be purchased during initial setup or later, through the Security Copilot portal or Azure.
Microsoft recommends starting with 3 SCUs per hour when exploring Security Copilot.
Scaling and Capacity
As you integrate Security Copilot into daily operations, you’ll gain insight into your actual SCU requirements. SCUs can be scaled up or down to match processing needs. In this context, capacity refers to the allocated Azure resource containing SCUs, representing the computing power available to manage and analyze security tasks efficiently.
Sessions and Context Retention
While using Security Copilot, the system captures the sequence of prompts and responses within a session. A session is a continuous interaction period during which Copilot retains context, allowing it to provide relevant, logically connected responses to subsequent prompts. This ensures smoother workflows and more accurate outputs as each response builds on the previous conversation.
[Monitoring SCUs image]
Research
Busting myths on Microsoft Security Copilot
NL2KQL (natural language to KQL)
n8n and LangChain workflows
BlinkOps automations
Other Copilots
Microsoft Copilot scenario library
Security Concerns with Microsoft Copilot
Data Privacy Compromises:
Validity: Concerns about data privacy are common with AI tools, but for a security product this one is largely a myth, and an ironic one at that.
Reasoning: Microsoft provides contractual commitments giving customers control over their own data. If a law enforcement agency or government requests your data, Microsoft's stated policy is to notify you and provide a copy of the request where legally permitted. Microsoft defends customer data through clearly defined, well-established response policies and processes, including:
Microsoft uses and enables the use of industry-standard encrypted transport protocols, such as Transport Layer Security (TLS) and Internet Protocol Security (IPsec) for any customer data in transit.
The Microsoft Cloud employs a wide range of encryption capabilities up to AES-256 for data at rest.
Your control over your data is reinforced by Microsoft compliance with broadly applicable privacy laws, such as GDPR and privacy standards. These include the world’s first international code of practice for cloud privacy, ISO/IEC 27018.
Beyond data protection, Security Copilot's contextual awareness transforms security operations by considering multiple critical dimensions:
Asset criticality and data sensitivity
Organizational structure and business processes
Temporal patterns and seasonal variations
Recent environmental changes like system updates or policy modifications
Current threat landscape and attack patterns
Regulatory requirements and compliance obligations
References
Microsoft Security Copilot, First Edition:
Microsoft Security Copilot Overview:
Microsoft Security Copilot Agents Overview:
Threat Intelligence Briefing Agent
Plugins:
Plugins overview Microsoft Security Copilot
Understand authentication in Microsoft Security Copilot
Monitor Azure Firewall
SCU pricing:
AI prompts Guide:
Automated incident triage with Security Copilot and Microsoft Sentinel/Defender XDR
SecurityCopilot-Sentinel-Incident-Investigation
The Growing Challenge of Auditing Agentic AI
Announcing Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview
Updates from Ignite 2025
Agent 365 Control Plane for agents