Inside Bicep: Understanding Module Snapshots in Registries

February 13, 2026
9 min read

Today I want to walk through some of the complexities of using Azure Container Registry (ACR) as a Bicep module registry, especially the ones that surface once we get into nested modules.

A popular approach when leaning heavily into Bicep is to create a library of reusable modules that can be shared across multiple projects. These modules are typically designed with hardcoded security boundaries, governance rules or naming conventions in mind. Regardless of the specific motivation, there are many benefits to having pre-approved modules and a structured way of reusing them.

The base module

Let’s get started with a simple module that creates a storage account, which we can build on top of. This module will be the first layer of our module library.

storage.bicep

```bicep
targetScope = 'resourceGroup'

@description('The name of the storage account to create.')
param storageAccountName string

@description('The location where the storage account will be created.')
@allowed([
  'westeurope'
  'northeurope'
])
param location string

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    accessTier: 'Hot'
    allowBlobPublicAccess: false
  }
}
```
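To make this module available in the registry, it can be published with the Azure CLI. A minimal sketch, assuming the registry name `myregistry` and the module path `resources/storageaccount` (both placeholders matching the reference used later in this post):

```shell
# Publish the compiled module to ACR as an OCI artifact.
# Requires Azure CLI with the Bicep extension and push access to the registry.
az bicep publish \
  --file storage.bicep \
  --target br:myregistry.azurecr.io/resources/storageaccount:latest
```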

The consumer module

Now that we have our first module published, we can start building on top of it. We can create a new module that references the first one, like this:

storagecontainer.bicep

```bicep
targetScope = 'resourceGroup'

@description('The name of the storage account to create.')
param storageAccountName string

@description('The location where the storage account will be created.')
@allowed([
  'westeurope'
  'northeurope'
])
param location string

@description('The name of the storage container to create.')
param storageContainerName string

module storageAccountModule 'br:myregistry.azurecr.io/resources/storageaccount:latest' = {
  name: 'storageAccountDeployment'
  params: {
    storageAccountName: storageAccountName
    location: location
  }
}

resource storageContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01' = {
  name: '${storageAccountName}/default/${storageContainerName}'
  properties: {
    publicAccess: 'None'
  }
  // Explicit dependency: the container name is built from the param, not the
  // module output, so Bicep cannot infer that the account must exist first.
  dependsOn: [
    storageAccountModule
  ]
}
```
Remark (Versioning)

In a production-grade environment you generally would not want to reference the latest tag for your modules in the registry; instead, have a proper versioning strategy in place to ensure stable and predictable deployments. The latest tag is used here purely for simplicity and demonstration. The same principles apply if you use, for example, a semantic versioning strategy that lets hotfixes flow automatically into the consumer module but still requires manual updates for major and minor versions.

This setup enables any developer building in Bicep to simply use the storagecontainer.bicep module without having to worry about which settings are required to create the storage account.

How Bicep Compiles External References

Before we get into updating the modules and working with the registry, there are a couple of crucial things to understand about how Bicep operations work.

When running any Bicep command, whether it is a build, publish or deployment, the Bicep CLI does a couple of things under the hood.

First, it resolves any externally referenced modules, which in our case is the storage.bicep module referenced by the storagecontainer.bicep module.

Remark (VSCode extensions)

If you develop in VS Code with the Azure and Bicep extensions installed, and you are logged into Azure with an account that has access to the registry, you can run build commands locally: the extension downloads the referenced modules and caches them locally, enabling IntelliSense and local builds.

The second thing that happens, and the most important to take note of, is that the modules are transpiled into ARM templates: the underlying language that Azure understands and that Bicep is an abstraction layer on top of.

So while we write our module as Bicep code, and any code that references it is also Bicep, the code that is actually stored in the registry is an ARM template.

The Bicep compiler processes this in the background, translating Bicep code into ARM template JSON.

Tip (Checking the content of the registry)

Since the code is stored in the registry as an OCI artifact, you can use a tool such as ORAS to pull the artifact from the registry and explore the actual ARM template code stored there.
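As a sketch (assuming ORAS v1.x, pull access to the registry, and the same placeholder registry name as before), pulling the base module artifact into a local folder looks like this:

```shell
# Log in first so ORAS can reuse the Docker credential store.
az acr login --name myregistry

# Pull the module artifact into ./module and inspect the ARM JSON inside.
oras pull myregistry.azurecr.io/resources/storageaccount:latest -o ./module
```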

It’s important to note that ARM templates have no capability for referencing external modules. This means that when you publish a Bicep module to the registry, a snapshot of the referenced modules is merged into the ARM template code stored in the registry.
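To illustrate (a heavily trimmed, hypothetical sketch, not the exact compiler output), the ARM template stored for storagecontainer.bicep carries the base module inlined as a nested Microsoft.Resources/deployments resource, with the base module’s own template embedded inside it:

```json
{
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "name": "storageAccountDeployment",
      "properties": {
        "mode": "Incremental",
        "template": {
          "resources": [
            {
              "type": "Microsoft.Storage/storageAccounts",
              "apiVersion": "2022-09-01"
            }
          ]
        }
      }
    }
  ]
}
```

This embedded inner template is the snapshot: it is frozen at publish time and does not follow later updates to the base module.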

Why does this matter?

Now that we have covered some of the inner workings of Bicep and how it handles modules and the registry, we can start accounting for it when designing our modules and how we structure our repositories.

The key takeaway from the previous section is that the snapshot is taken at the moment of publishing.

So, using our example files from before: if we publish the storagecontainer.bicep module to the registry, Bicep takes a snapshot of the storage.bicep module and merges it into the ARM template stored in the registry.

If we then make changes to the storage.bicep module and publish it again, the changes will not be reflected in the storagecontainer.bicep module that is stored in the registry.

To get those changes reflected, we would need to publish the storagecontainer.bicep module again.
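Put concretely (same placeholder registry as before; the `resources/storagecontainer` path is an assumption for the consumer module), both modules must be republished for a base change to reach consumers:

```shell
# 1. Publish the updated base module.
az bicep publish \
  --file storage.bicep \
  --target br:myregistry.azurecr.io/resources/storageaccount:latest

# 2. Republish the consumer so it snapshots the new base module.
az bicep publish \
  --file storagecontainer.bicep \
  --target br:myregistry.azurecr.io/resources/storagecontainer:latest
```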

So, how to design for this?

From a technical perspective, there is a multitude of methods and approaches to design for this, but let’s start by looking at a high-level picture of what needs to be done.

```mermaid
%%{init: {"themeVariables": {"fontSize": "24px"}}}%%
flowchart LR
  A(("Publish Base Module")) ~~~ H1["H1"]
  A --> B(("Detect change"))
  B --> C(("Publish Consumer Module"))
  B ~~~ H2["H2"]
  C ~~~ H3["H3"]
  C --> D(("Done"))
  H1:::hidden
  H2:::hidden
  H3:::hidden
  classDef hidden display: none
  classDef big font-size: 20px, padding: 15px
  class A,B,C,D big
```

The exact implementation details will come down to the specific requirements and constraints of your environment, as identity, networking and security can be handled differently. That said, I have generally implemented solutions to this problem in two different ways.

Strategy 1: Event-Driven Updates (Webhooks)

This solution focuses on the interactions between detecting the change and publishing the dependent module. The idea is to leverage the webhooks functionality of the ACR to trigger a CI/CD pipeline that would handle the publishing of the dependent module once a change is detected in the registry.

In a GitHub environment this requires a dedicated workflow that is triggered by the webhook and handles the publishing of the dependent module. To make this work, and to be able to trigger the workflow at all, we have a couple of hurdles to solve.

  • We need to be able to trigger on a push to the registry, preferably with filtering so that we only trigger on changes to the relevant module
  • We need to handle authentication to the GitHub API securely

The first point can be solved in two ways: either using the direct webhook functionality of ACR, or using an Event Grid event subscription on the registry configured for push events.
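As a sketch of the first option (the webhook name, endpoint URI and scope below are placeholders), an ACR webhook that fires only on pushes to the base module’s repository could be created like this:

```shell
# Create an ACR webhook that posts to an HTTP endpoint on artifact push.
# --scope limits it to the base module's repository, so unrelated pushes are ignored.
az acr webhook create \
  --registry myregistry \
  --name baseModulePushed \
  --uri https://example.com/api/module-updated \
  --actions push \
  --scope 'resources/storageaccount:*'
```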

The second point requires a bit more work, but I’ve generally found a Logic App that reads a GitHub token from a Key Vault and triggers the workflow via the GitHub API to be a good solution.
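The call the Logic App ultimately makes is a plain GitHub REST API request. A sketch (owner, repo and event type are placeholders; the token needs permission to create dispatch events on the repository):

```shell
# Fire a repository_dispatch event that a workflow can subscribe to.
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO/dispatches \
  -d '{"event_type": "base-module-updated"}'
```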

A similar approach can be taken in Azure DevOps, where an incoming webhook is configured under service connections, ideally with a secret so that only authenticated calls are accepted. Swap the GitHub token in the setup above for this secret and you achieve the same result.

Strategy 2: Pipeline Chaining

Based on the assumption that modules are only ever published and updated via a CI/CD pipeline, we can put more of the responsibility on the pipelines themselves to detect changes and publish dependent modules.

Since we already have a workflow or pipeline responsible for publishing the modules, we can add one more step to the initial workflow: triggering a secondary workflow that publishes the dependent module after the initial module has been published.

This approach requires a bit more work in the pipelines themselves, but the upside is that it’s easier to add additional logic to the process, such as checks and approvals, before publishing the dependent module.

In an Azure DevOps environment this can be done by defining the trigger on the secondary pipeline as follows:

```yaml
trigger: none # this pipeline is only triggered by another pipeline

resources:
  pipelines:
    - pipeline: demopipeline # Name of the pipeline resource.
      source: demopipeline-ci # Name of the pipeline referenced by this resource.
      project: FabrikamProject # Required only if the source pipeline is in another project.
      trigger: true # Run this pipeline when the source pipeline completes.
```

In a GitHub-based environment, this can be achieved using either the workflow_call trigger or repository_dispatch to trigger the secondary workflow. The first is a bit easier to set up but requires the workflows to live in the same repository, while the second can trigger across repositories. I’ve generally found repository_dispatch the more flexible option, as it allows better separation of concerns between the different modules and their respective pipelines if needed.

To showcase this, I’ve built a simple example of how this would look in a GitHub environment, with two workflows: one for publishing the initial module and one for publishing the dependent module.

The first workflow will trigger the second one once it has completed the publishing of the initial module, and acts as a signal that the dependent modules might need updating. You can then add logic to this second workflow to check for actual changes before publishing, ensuring you only release updates when necessary.
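A minimal sketch of the receiving side (the workflow name, event type and publish step are assumptions, not the demo repository’s exact contents; authentication against Azure is omitted for brevity):

```yaml
# .github/workflows/publish-consumer.yml
name: Publish consumer module

on:
  repository_dispatch:
    types: [base-module-published]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Republish consumer so it snapshots the new base module
        run: |
          az bicep publish \
            --file storagecontainer.bicep \
            --target br:myregistry.azurecr.io/resources/storagecontainer:latest
```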

Tip (Get the Code)

Want to see this in action? I’ve built a complete working example of the pipeline chaining strategy.

Check out the Demo Repository on GitHub to clone the workflows and try it yourself.

Conclusion

By understanding how Bicep snapshots dependencies, we move from being surprised by stale modules to actively managing our infrastructure supply chain. Whether you choose event-driven webhooks or pipeline chaining, the goal remains the same: ensuring that a fix in a base module automatically propagates to every consumer down the line.

I hope these strategies help you build a more robust and automated registry setup.