A pull request merged into the Azure Sentinel GitHub repo on 20 February 2026 added a new Codeless Connector Framework (CCF) template for Azure Storage Blob. It’s marked as a public preview feature, and if you’ve built CCF connectors before, you’ll notice this one works differently to anything that came before it.

A new connector kind

Most CCF connectors use the RestApiPoller kind, where Sentinel polls a REST API on a schedule, retrieves records, and writes them to Log Analytics. This new connector introduces a different kind entirely:

"kind": "StorageAccountBlobContainer"

Instead of polling, it’s event-driven:

  1. You point Sentinel at an Azure Blob Container
  2. On Connect, Sentinel deploys supporting infrastructure into your storage account’s subscription
  3. Event Grid fires a Microsoft.Storage.BlobCreated event when a new blob lands
  4. That event gets queued into a Storage Queue
  5. Sentinel reads from the queue, fetches each blob, and ingests via a DCR into Log Analytics
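The notification that lands in the queue at step 4 follows the standard Event Grid event schema for Blob Storage. A representative payload is sketched below; the subscription, container, and blob names are illustrative, not from the template:

```json
{
  "topic": "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storageAccount>",
  "subject": "/blobServices/default/containers/logs/blobs/2026/02/20/audit-001.json",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2026-02-20T09:30:00Z",
  "id": "<event GUID>",
  "data": {
    "api": "PutBlob",
    "contentType": "application/json",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://<storageAccount>.blob.core.windows.net/logs/2026/02/20/audit-001.json"
  },
  "dataVersion": "",
  "metadataVersion": "1"
}
```

Note that the event carries the blob's URL and metadata only; Sentinel still has to fetch the blob itself, which is why the role assignments described below are needed.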

The moment a blob appears, Sentinel knows about it. No more polling intervals or pagination logic.

CCF already has several specialised connector kinds in production: RestApiPoller, Push, GCP, AmazonWebServicesS3, WebSocket, OCI, PurviewAudit, and others. StorageAccountBlobContainer is new to the family, and it wires Sentinel to Azure’s own eventing infrastructure rather than an external HTTP endpoint or cloud-provider SDK.

What gets deployed when you hit “Connect”

This is the non-obvious bit. Unlike a RestApiPoller connector that just registers a polling job, this one provisions actual Azure infrastructure via a nested ARM deployment running cross-subscription:

  • Storage Queue (storageblob-notification): receives blob-created event notifications
  • Storage Queue (storageblob-dlq): dead-letter queue for failed notifications
  • Event Grid System Topic: subscribes to Microsoft.Storage.BlobCreated events
  • Event Grid Subscription: filters events for your specific container and optional folder prefix, routes them to the notification queue
  • Role Assignment, Storage Blob Data Contributor: grants the Sentinel service principal read access to blobs
  • Role Assignment, Storage Queue Data Contributor (x2): grants read/delete on both the notification and dead-letter queues
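As a sketch of what the nested deployment wires up, an Event Grid subscription that filters on one container (plus an optional folder prefix) and delivers to the notification queue uses standard Event Grid subscription properties along these lines; resource IDs are abbreviated and the container name is illustrative:

```json
{
  "properties": {
    "filter": {
      "includedEventTypes": [ "Microsoft.Storage.BlobCreated" ],
      "subjectBeginsWith": "/blobServices/default/containers/<container>/blobs/<folderPrefix>"
    },
    "destination": {
      "endpointType": "StorageQueue",
      "properties": {
        "resourceId": "/subscriptions/<subId>/.../storageAccounts/<storageAccount>",
        "queueName": "storageblob-notification"
      }
    }
  }
}
```

The subjectBeginsWith filter is how the "optional folder prefix" scoping works: only blobs whose path starts with that prefix generate notifications.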

All of this gets torn down or reconfigured on disconnect. The connector also handles the case where an Event Grid System Topic already exists on the storage account (only one per account is allowed). You provide the existing topic name and it reuses it.

The connection parameters at the ARM level are simple:

"request": {
    "QueueUri": "https://<storageAccount>.queue.core.windows.net/storageblob-notification",
    "DlqUri":   "https://<storageAccount>.queue.core.windows.net/storageblob-dlq"
}

Authentication is "type": "ServicePrincipal". During setup, the UI displays the Sentinel connector’s application ID so you can verify the role assignments. You don’t enter credentials manually; the framework uses its own managed identity. You just need subscription-level permissions to create the Event Grid and queue resources, plus Contributor rights on the storage account.
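Putting those pieces together, a connector resource might look roughly like the following. This is a hedged sketch, not the template verbatim: the dcrConfig field names follow the pattern used by other CCF connector kinds, and the stream name is hypothetical.

```json
{
  "kind": "StorageAccountBlobContainer",
  "properties": {
    "auth": {
      "type": "ServicePrincipal"
    },
    "request": {
      "QueueUri": "https://<storageAccount>.queue.core.windows.net/storageblob-notification",
      "DlqUri": "https://<storageAccount>.queue.core.windows.net/storageblob-dlq"
    },
    "dcrConfig": {
      "streamName": "Custom-MyBlobLogs_CL",
      "dataCollectionEndpoint": "<DCE ingestion endpoint>",
      "dataCollectionRuleImmutableId": "<DCR immutable ID>"
    }
  }
}
```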

Why this matters

Getting blob-stored data into Sentinel used to mean building a custom Function App or Logstash pipeline, wrestling a RestApiPoller connector through the Azure Storage REST API’s pagination, or manually configuring AMA data sources. This connector reduces all of that to a template and a Connect button.

The bigger deal is the list of log sources that routinely land data in blob storage. Activity Logs, Resource Diagnostic Logs, and Entra ID Sign-in/Audit Logs can all be exported to blob via Diagnostic Settings, so any Azure service that supports “Archive to a storage account” can now feed Sentinel through this connector. Same goes for AKS audit logs, Azure SQL audit logs, Azure Firewall Flow Logs, NSG Flow Logs, and DDoS Protection logs.

It’s not limited to Azure-native sources either. Any security tool that supports exporting to Azure Blob Storage as an output (F5, Palo Alto, Cisco, and others all have this) becomes a candidate. STIX/TAXII feeds or indicator files delivered as blobs on a schedule work too: connect the container, and every new drop is automatically ingested. Even immutable blob storage used for compliance archiving can be tapped as a read source without breaking immutability.

One thing I think will matter in practice: cross-subscription support. The connector explicitly supports storage accounts in a different subscription to the Sentinel workspace. You supply the storage account’s subscription ID, resource group, and location as connection parameters. In every enterprise I’ve worked with, logs end up centralised in a dedicated storage subscription while Sentinel lives somewhere else, so this is a welcome design choice.
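The exact parameter names in the template may differ; as an illustrative sketch, the cross-subscription connection parameters amount to something like:

```json
{
  "storageAccountSubscriptionId": "<GUID of the storage subscription>",
  "storageAccountResourceGroupName": "<resource group holding the storage account>",
  "storageAccountLocation": "<region, e.g. westeurope>"
}
```

These are what let the nested ARM deployment target the storage account’s subscription rather than the one hosting the Sentinel workspace.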

It’s a template, not a finished product

To be clear about what was actually merged: this is a reference template in DataConnectors/Templates/, not a finished solution in Solutions/. The file is annotated with // Modify to your ... comments throughout. Variables like _logAnalyticsTableId1, the stream name, DCR transform, and connector definition ID all need customising for a specific data source.

The DCR transform in the template is deliberately minimal:

source | extend TimeGenerated = now()

Real connectors would extend this to parse blob content into structured columns. The raw event from Event Grid describes the blob metadata, not the blob contents themselves, so the transform needs to handle whatever format your blobs are in: JSON lines, CSV, CEF, and so on. That’s where you’ll spend your time. The eventing plumbing is handled for you, but structuring the actual log data is on you.
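For example, assuming the blobs contain JSON-lines records and the custom stream surfaces each record in a RawData string column (both assumptions; your stream schema and field names will differ), a more realistic transform might look like:

```kusto
source
| extend parsed = parse_json(RawData)
| extend TimeGenerated = todatetime(parsed.timestamp),
         Operation = tostring(parsed.operation),
         Caller = tostring(parsed.caller)
| project-away RawData, parsed
```

Keep in mind that DCR transforms support only a subset of KQL, so anything exotic (joins, external data, plugins) is off the table; flattening and type conversion like the above is the typical shape.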

Watch out for

Public preview, so rough edges are expected:

  • The ServicePrincipalIDTextBox_test instruction type has _test in the name. I’d bet that gets renamed before GA.
  • One Event Grid System Topic per storage account is an Azure platform limit. The connector handles this, but if you’re connecting multiple blob containers from the same storage account, they’ll share a topic and you’ll need to coordinate.
  • Cross-subscription deployments require the deploying user to have appropriate RBAC in the storage account’s subscription, not just the Sentinel workspace subscription.
  • The DCR transform must parse the blob content. This is where incorrect field mappings or unexpected blob formats will quietly break ingestion, so test with real data early.

The reference template is in the Azure Sentinel GitHub repository at DataConnectors/Templates/Connector_StorageBlob_CCF_template.json. I’d expect a polished Content Hub solution to follow once this is closer to GA.


Based on PR #13668 merged 20 February 2026.