Seamless CI/CD for Azure Synapse with Azure DevOps

Bringing robust DevOps practices to cloud data platforms is crucial for delivering scalable, reliable solutions. In this post, I’ll walk through how we implemented a streamlined CI/CD pipeline for Azure Synapse Analytics using Azure DevOps. This setup enables our team to safely promote changes from development to production with version control, build automation, deployment approvals, and minimal manual effort.

Overview

We designed a two-pipeline approach:

  1. Build Stage – Compiles and publishes a DACPAC artifact from the Synapse SQL project
  2. Deploy Stage – Deploys the DACPAC to the production Synapse SQL pool with preconfigured approval gates

Keeping build and deploy in separate pipelines gives us finer control over deployment triggers and lets us insert an approval step before anything touches production.


🔧 Tools & Stack

  • Azure DevOps: Repos, Pipelines, Variable Groups, Environments
  • Git: Branch-based development and PR workflows
  • SQL Server Data Tools (SSDT): .sqlproj for the Synapse dedicated SQL pool
  • DACPAC: Used as the deployment artifact

🚀 Build Pipeline (build-sql-dacpac.yml)

This pipeline triggers on commits to the main branch. It builds the .sqlproj and outputs a DACPAC artifact.

trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'windows-latest'

variables:
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - task: NuGetToolInstaller@1
    - task: NuGetCommand@2
      inputs:
        restoreSolution: '**/*.sln'
    - task: VSBuild@1
      inputs:
        solution: '**/*.sln'
        msbuildArgs: '/p:DeployOnBuild=true /p:TargetProfile=AzureSynapse /p:OutputPath=$(Build.ArtifactStagingDirectory)'
        platform: '$(buildPlatform)'
        configuration: '$(buildConfiguration)'
    - task: CopyFiles@2
      inputs:
        contents: '**/*.dacpac'
        targetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'sql_dacpac'

🚨 Deploy Pipeline (deploy-prod.yml)

This pipeline consumes the published DACPAC and deploys it to the production SQL pool, but only after manual approval is granted via Azure DevOps Environments.

trigger: none
pr: none

resources:
  pipelines:
    - pipeline: buildPipeline
      source: 'Build & Publish .DACPAC'
      trigger:
        branches:
          include:
            - main

variables:
- group: SqlProdCredentials  # Secure variables (e.g., username, password)

stages:
- stage: DeployToPROD
  displayName: 'Deploy to PROD'
  condition: succeeded()
  jobs:
  - deployment: DeployDacpac
    displayName: 'Deploy DACPAC to PROD SQL'
    environment: 'prod'  # Linked to an approval gate
    strategy:
      runOnce:
        deploy:
          steps:
          - download: buildPipeline
            artifact: sql_dacpac
          - task: SqlAzureDacpacDeployment@1
            inputs:
              azureSubscription: 'Your Azure Service Connection'
              ServerName: 'your-prod-sqlserver.database.windows.net'
              DatabaseName: 'your-prod-database'
              SqlUsername: '$(ProdSqlUsername)'
              SqlPassword: '$(ProdSqlPassword)'
              DacpacFile: '$(Pipeline.Workspace)/buildPipeline/sql_dacpac/YourProject.dacpac'
              AdditionalArguments: '/p:BlockOnPossibleDataLoss=false'  # Permits schema changes that may drop data; review the generated script before relying on this in PROD

✅ Approvals & Environments

To safeguard production, we created an Azure DevOps Environment named prod and assigned it an approval gate. Any deployment to this environment halts until an authorized team member (e.g., lead engineer or architect) approves it.

This simple control prevents accidental deployments and enforces peer review before changes go live.


🔄 Workflow Summary

  1. Developer merges changes into main
  2. Build & Publish .DACPAC pipeline runs automatically
  3. Deploy to PROD pipeline is triggered automatically by the completed build (via the pipeline resource trigger)
  4. Deployment pauses pending approval
  5. Once approved, the pipeline deploys the DACPAC to the production SQL pool

🔐 Secrets Management

Credentials for the production SQL database are stored in an Azure DevOps variable group linked to the deploy pipeline, with the password marked as a secret so it is masked in logs. This keeps sensitive information out of source control and ensures secure access during deployments.


🧠 Lessons Learned Along the Way

One unexpected roadblock came from using Visual Studio's Schema Compare. While comparing the DEV environment to the code repository worked perfectly, attempting to compare the code repository to the PROD environment failed with a cryptic error: “An entry with the same key already exists.”

The message gave no indication of which object caused the issue. After exhausting Google and ChatGPT, I opened a Microsoft support ticket. During a troubleshooting call, we suspected the PROD environment, but that didn’t quite add up—it was the same environment used to originally generate the code repository.

On a hunch, I excluded all tables from the schema project and found that the comparison worked. Narrowing it down further, I grouped tables by distribution type—hash vs. replicate vs. round robin—and discovered the problematic table was among the round robin ones. After splitting the group and testing iteratively, I isolated a single table that had two non-clustered indexes on the exact same columns in the exact same order.
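In hindsight, the distribution-type grouping didn't have to be trial and error. A catalog query like the following (a sketch against the dedicated SQL pool's standard catalog views, not part of our pipeline) lists every user table with its distribution policy:

```sql
-- Sketch: list user tables grouped by distribution type
-- (Synapse dedicated SQL pool catalog views)
SELECT
    dp.distribution_policy_desc,            -- HASH, ROUND_ROBIN, or REPLICATE
    OBJECT_SCHEMA_NAME(t.object_id) AS schema_name,
    t.name                          AS table_name
FROM sys.tables AS t
JOIN sys.pdw_table_distribution_properties AS dp
    ON dp.object_id = t.object_id
ORDER BY dp.distribution_policy_desc, schema_name, table_name;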

This kind of duplication is allowed in Synapse but confuses Schema Compare, resulting in the vague key conflict error. Once I dropped one of the duplicate indexes, the schema comparison completed successfully.
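If you hit the same error, a catalog query can surface the duplicates directly instead of bisecting the schema project by hand. This is a sketch (names and filters are illustrative); it builds an ordered key-column list per non-clustered index with STRING_AGG and self-joins to find indexes on the same table with identical keys:

```sql
-- Sketch: find non-clustered indexes that duplicate another index's
-- key columns (same table, same columns, same order)
WITH index_keys AS (
    SELECT
        i.object_id,
        i.index_id,
        i.name AS index_name,
        STRING_AGG(CAST(c.name AS NVARCHAR(MAX)), ',')
            WITHIN GROUP (ORDER BY ic.key_ordinal) AS key_columns
    FROM sys.indexes AS i
    JOIN sys.index_columns AS ic
        ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns AS c
        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.type_desc = 'NONCLUSTERED'
    GROUP BY i.object_id, i.index_id, i.name
)
SELECT
    OBJECT_SCHEMA_NAME(a.object_id) AS schema_name,
    OBJECT_NAME(a.object_id)        AS table_name,
    a.index_name,
    b.index_name AS duplicate_of,
    a.key_columns
FROM index_keys AS a
JOIN index_keys AS b
    ON  a.object_id   = b.object_id
    AND a.key_columns = b.key_columns
    AND a.index_id    > b.index_id;   -- report each pair once
```

Dropping the redundant index (DROP INDEX index_name ON schema_name.table_name) is then a one-liner, after confirming nothing depends on it by name.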


Final Thoughts

This CI/CD process dramatically reduces friction in promoting changes while preserving control and security. Whether you're running a large-scale Synapse platform or a modest dedicated SQL pool, pairing DACPACs with Azure DevOps pipelines is a low-cost, high-impact way to modernize your deployment strategy.

Got questions or want to learn more? Drop a comment or reach out!
