Azure DevOps

Comprehensive guide to Azure DevOps topics for interview preparation.

Detailed Overview

Azure DevOps provides developer services that help teams plan work, collaborate on code development, and build and deploy applications. It offers a comprehensive suite of tools including Azure Repos, Azure Pipelines, Azure Boards, Azure Test Plans, and Azure Artifacts.

Azure CLI Commands
# Create a new Azure DevOps pipeline
az pipelines create --name MyPipeline --repository MyRepo --branch main

# List all pipelines
az pipelines list

# Run a pipeline
az pipelines run --name MyPipeline

# Create service connection
# The generic command takes a JSON configuration file rather than --name/--type
az devops service-endpoint create \
  --service-endpoint-configuration serviceConnection.json \
  --organization https://dev.azure.com/MyOrg \
  --project MyProject
Bicep Code Snippet
// Note: Azure DevOps resources are typically managed via REST API or CLI
// Bicep is primarily for Azure resources. For DevOps organization structure:

// Example: Deploying infrastructure that DevOps pipelines will use
@description('Resource group for DevOps resources')
param resourceGroupName string = 'rg-devops-${uniqueString(resourceGroup().id)}'

@description('Location for resources')
param location string = resourceGroup().location

// Storage account for pipeline artifacts
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stdevops${uniqueString(resourceGroup().id)}'
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

// Key Vault for pipeline secrets
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: 'kv-devops-${uniqueString(resourceGroup().id)}'
  location: location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: subscription().tenantId
    enableSoftDelete: true
    enablePurgeProtection: true
    accessPolicies: []
  }
}
Senior-Level Q&A

Q: How do you design a CI/CD strategy that balances speed, security, and quality across multiple teams and environments?

A: Implement a tiered approach: fast feedback loops for development (automated tests, linting), gated deployments for staging (integration tests, security scans), and controlled production releases (approvals, canary deployments). Use Infrastructure as Code (IaC) for consistency, implement pipeline templates for standardization, establish clear branching strategies (GitFlow, trunk-based), and create a security-first mindset with secrets management, least-privilege access, and automated compliance checks. Balance automation with human oversight for critical decisions, and establish metrics (deployment frequency, lead time, MTTR) to continuously improve.
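The delivery metrics named above (deployment frequency, change failure rate) are simple ratios once you have a deployment log; a minimal Python sketch over hypothetical data:

```python
from datetime import date

# Hypothetical deployment log: (date, caused_incident)
deployments = [
    (date(2024, 1, 1), False),
    (date(2024, 1, 3), True),
    (date(2024, 1, 8), False),
    (date(2024, 1, 10), False),
]

def deployment_frequency(deps, days):
    """Deployments per day over the observation window."""
    return len(deps) / days

def change_failure_rate(deps):
    """Fraction of deployments that caused a production incident."""
    failures = sum(1 for _, failed in deps if failed)
    return failures / len(deps)

print(deployment_frequency(deployments, days=10))  # 0.4
print(change_failure_rate(deployments))            # 0.25
```

In practice these numbers come from pipeline run history and incident tickets rather than a hand-built list, but the calculations are the same.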

Q: What are the key architectural decisions when designing a multi-tenant CI/CD platform that serves hundreds of development teams?

A: Consider isolation strategies (separate agent pools, namespaces, or dedicated infrastructure per tenant), resource quotas and limits to prevent resource exhaustion, centralized vs distributed secret management, pipeline template governance to enforce standards, audit logging and compliance tracking, cost allocation and chargeback mechanisms, self-service capabilities with guardrails, and disaster recovery strategies. Evaluate trade-offs between shared infrastructure (cost efficiency) vs dedicated resources (isolation, performance). Implement feature flags for gradual rollout of platform changes.

Q: How do you handle secrets and configuration management in a cloud-native CI/CD pipeline while maintaining security and auditability?

A: Use a secrets management service (Azure Key Vault, HashiCorp Vault) with rotation capabilities, implement managed identities for service-to-service authentication to eliminate credential storage, use reference-based configuration (Key Vault references) rather than embedding secrets, implement least-privilege access with time-bound permissions, enable comprehensive audit logging for all secret access, use environment-specific configuration with clear separation, implement automated secret rotation, and establish a clear process for emergency access with break-glass procedures. Never commit secrets to source control, and use pre-commit hooks and scanning tools to prevent accidental exposure.
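The pre-commit scanning mentioned above boils down to pattern matching over staged content; a toy Python sketch (the patterns are illustrative stand-ins for the much larger rule sets real scanners such as gitleaks ship with):

```python
import re

# Hypothetical patterns; real scanners maintain hundreds of vetted rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID shape
    re.compile(r"(?i)accountkey=[A-Za-z0-9+/=]{20,}"),   # storage connection string key
]

def scan_text(text):
    """Return the patterns that matched, simulating a pre-commit hook check."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

clean = 'connection = os.environ["DB_CONN"]'
leaky = 'password = "Hunter2!"'
print(scan_text(clean))  # no findings
print(scan_text(leaky))  # one finding
```

A hook would run this over each staged file and fail the commit on any match, forcing the developer to move the value into Key Vault or an environment variable.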

Best Practices
  • Use YAML pipelines for version control and better collaboration
  • Integrate with Azure Key Vault for secrets management
  • Enable branch policies for code quality and security
  • Use pipeline templates for reusability across projects
  • Implement proper artifact retention policies
  • Use service connections with minimal required permissions

Azure Functions

Detailed Overview

Azure Functions is a serverless compute service that enables you to run code on-demand without having to explicitly provision or manage infrastructure. Functions can be triggered by various events and support multiple programming languages including C#, JavaScript, Python, and Java.

Azure CLI Commands
# Create storage account for Function App
az storage account create \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --location westeurope \
  --sku Standard_LRS

# Create Function App with Consumption plan
az functionapp create \
  --resource-group myResourceGroup \
  --consumption-plan-location westeurope \
  --runtime dotnet \
  --functions-version 4 \
  --name myFunctionApp \
  --storage-account mystorageaccount

# Create Function App with Premium plan
az functionapp create \
  --resource-group myResourceGroup \
  --plan myPremiumPlan \
  --runtime dotnet \
  --functions-version 4 \
  --name myFunctionApp \
  --storage-account mystorageaccount

# Enable managed identity
az functionapp identity assign \
  --name myFunctionApp \
  --resource-group myResourceGroup
Bicep Code Snippet
@description('Name of the Function App')
param functionAppName string

@description('Location for resources')
param location string = resourceGroup().location

@description('Storage account name')
param storageAccountName string

@description('App Service Plan SKU')
param appServicePlanSku string = 'Y1' // Consumption plan

// Storage account for Function App
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    supportsHttpsTrafficOnly: true
  }
}

// App Service Plan (Consumption)
resource appServicePlan 'Microsoft.Web/serverfarms@2023-01-01' = {
  name: 'plan-${functionAppName}'
  location: location
  kind: 'functionapp'
  sku: {
    name: appServicePlanSku
  }
  properties: {}
}

// Function App
resource functionApp 'Microsoft.Web/sites@2023-01-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp'
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      appSettings: [
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${storageAccount.listKeys().keys[0].value}'
        }
        {
          name: 'WEBSITE_CONTENTSHARE'
          value: toLower(functionAppName)
        }
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~4'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'dotnet'
        }
      ]
      ftpsState: 'FtpsOnly'
      http20Enabled: true
      minTlsVersion: '1.2'
    }
    httpsOnly: true
  }
  identity: {
    type: 'SystemAssigned'
  }
}

// Function App source code (C# example) — shown as line comments, because the
// cron expression "0 */5 * * * *" contains "*/" and would terminate a /* */ block early
// TimerTriggerFunction.cs
// using Microsoft.Azure.WebJobs;
// using Microsoft.Extensions.Logging;
//
// public static class TimerTriggerFunction
// {
//     [FunctionName("TimerTriggerFunction")]
//     public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger log)
//     {
//         log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
//     }
// }
Senior-Level Q&A

Q: When should you choose serverless functions over containerized applications or traditional VMs, and what are the architectural trade-offs?

A: Serverless functions excel for event-driven workloads, sporadic traffic patterns, and microservices with independent scaling. Choose containers when you need consistent performance, longer execution times, specific runtime environments, or complex dependencies. VMs suit legacy applications or when you need full OS control. Trade-offs: serverless offers cost efficiency for variable workloads but introduces cold starts and vendor lock-in; containers provide portability and consistent performance but require orchestration overhead; VMs offer maximum control but higher operational burden. Consider factors like execution time limits, state management needs, cost predictability, and team expertise.

Q: How do you design a serverless architecture that handles high-volume, mission-critical workloads with strict SLAs?

A: Use Premium or Dedicated hosting plans to eliminate cold starts, implement circuit breakers and retry policies with exponential backoff, design for idempotency to handle retries safely, use Durable Functions for complex workflows with state management, implement distributed tracing and comprehensive monitoring, design for horizontal scaling with stateless functions, use message queues for decoupling and buffering, implement health checks and graceful degradation, establish SLOs and error budgets, and have fallback mechanisms. Consider multi-region deployment for disaster recovery and use CDN/caching layers to reduce function invocations.
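Retry with exponential backoff plus jitter, and the idempotency caveat that goes with it, can be sketched in a few lines of Python (the flaky operation is a hypothetical stand-in for a remote call):

```python
import random

def backoff_delays(max_attempts, base=0.5, cap=30.0, seed=None):
    """Exponential backoff with full jitter: delay_n ~ U(0, min(cap, base * 2**n))."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(max_attempts)]

def call_with_retries(operation, max_attempts=5):
    """Retry a flaky operation. The operation must be idempotent: a timed-out
    call may have succeeded server-side before the retry fires."""
    last_error = None
    for delay in backoff_delays(max_attempts):
        try:
            return operation()
        except ConnectionError as exc:
            last_error = exc
            # a real client would time.sleep(delay) here; omitted to keep the sketch fast
    raise last_error

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # succeeds on the third attempt
```

Jitter matters at scale: without it, many clients that failed together retry together, producing synchronized load spikes on the recovering dependency.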

Q: What strategies do you employ to optimize costs in a serverless architecture while maintaining performance and reliability?

A: Right-size function memory allocation (directly impacts CPU and cost), optimize code execution time and minimize dependencies, use connection pooling and singleton patterns for external resources, implement intelligent caching strategies, choose appropriate hosting plans (Consumption for variable, Premium for predictable), use event-driven architecture to avoid polling, batch operations where possible, implement proper logging levels to reduce telemetry costs, use reserved capacity for predictable workloads, monitor and alert on cost anomalies, and regularly review and optimize based on actual usage patterns. Balance between over-provisioning (cost) and under-provisioning (performance).
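Because Consumption billing is roughly GB-seconds of execution plus a per-execution charge, the memory/cost relationship can be reasoned about with simple arithmetic. The unit prices below are illustrative assumptions, not current list prices, and the sketch ignores free grants, the minimum billed duration, and memory rounding:

```python
def consumption_cost(executions, avg_seconds, memory_gb,
                     price_per_gb_s=0.000016, price_per_million=0.20):
    """Rough Consumption-plan estimate (illustrative prices; ignores free grants,
    minimum billed duration, and memory-size rounding)."""
    gb_seconds = executions * avg_seconds * memory_gb
    return gb_seconds * price_per_gb_s + executions / 1_000_000 * price_per_million

# 10M executions/month at 200 ms each: halving memory halves the GB-second charge.
print(round(consumption_cost(10_000_000, 0.2, 0.5), 2))   # 18.0
print(round(consumption_cost(10_000_000, 0.2, 0.25), 2))  # 10.0
```

The same arithmetic also shows why shaving execution time (fewer GB-seconds) usually beats micro-optimizing the per-execution count.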

Best Practices
  • Choose the right hosting plan (Consumption, Premium, or Dedicated) based on workload
  • Use managed identities for secure access to Azure resources
  • Implement proper error handling and retry policies
  • Use Application Insights for monitoring and diagnostics
  • Optimize function code to reduce execution time and costs
  • Use environment-specific configuration via Application Settings
  • Implement proper logging and structured logging practices

Azure App Service Plans

Detailed Overview

An Azure App Service Plan defines a set of compute resources for running App Service apps. It determines the region, number of VM instances, size of VM instances, and pricing tier. Multiple apps can share the same App Service Plan, allowing cost optimization.

Azure CLI Commands
# Create App Service Plan
az appservice plan create \
  --name myAppServicePlan \
  --resource-group myResourceGroup \
  --sku B1 \
  --is-linux \
  --location westeurope

# Create Web App in the plan
az webapp create \
  --resource-group myResourceGroup \
  --plan myAppServicePlan \
  --name myWebApp \
  --runtime "DOTNETCORE:6.0"

# Configure auto-scaling
az monitor autoscale create \
  --name myAutoscaleSettings \
  --resource-group myResourceGroup \
  --resource /subscriptions/{sub-id}/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan \
  --min-count 2 \
  --max-count 10 \
  --count 2

# Create deployment slot
az webapp deployment slot create \
  --name myWebApp \
  --resource-group myResourceGroup \
  --slot staging

# Scale App Service Plan
az appservice plan update \
  --name myAppServicePlan \
  --resource-group myResourceGroup \
  --sku S1
Bicep Code Snippet
@description('Name of the App Service Plan')
param appServicePlanName string

@description('Name of the Web App')
param webAppName string

@description('Location for resources')
param location string = resourceGroup().location

@description('SKU for App Service Plan')
@allowed(['B1', 'B2', 'B3', 'S1', 'S2', 'S3', 'P1V2', 'P2V2', 'P3V2', 'P1V3', 'P2V3', 'P3V3'])
param skuName string = 'S1'

@description('SKU tier')
@allowed(['Basic', 'Standard', 'Premium', 'PremiumV2', 'PremiumV3'])
param skuTier string = 'Standard'

@description('Linux or Windows')
@allowed(['Linux', 'Windows'])
param osType string = 'Linux'

@description('Runtime stack')
param runtimeStack string = 'DOTNETCORE|6.0'

// App Service Plan
resource appServicePlan 'Microsoft.Web/serverfarms@2023-01-01' = {
  name: appServicePlanName
  location: location
  kind: osType == 'Linux' ? 'linux' : 'app'
  sku: {
    name: skuName
    tier: skuTier
    capacity: 1
  }
  properties: {
    reserved: osType == 'Linux'
  }
}

// Web App
resource webApp 'Microsoft.Web/sites@2023-01-01' = {
  name: webAppName
  location: location
  kind: 'app'
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      // linuxFxVersion carries the runtime stack on Linux; on Windows the runtime is
      // configured via netFrameworkVersion (windowsFxVersion is for Windows containers only)
      linuxFxVersion: osType == 'Linux' ? runtimeStack : null
      alwaysOn: true
      http20Enabled: true
      minTlsVersion: '1.2'
      ftpsState: 'FtpsOnly'
      appSettings: [
        {
          name: 'WEBSITE_ENABLE_SYNC_UPDATE_SITE'
          value: 'true'
        }
      ]
    }
    httpsOnly: true
  }
  identity: {
    type: 'SystemAssigned'
  }
}

// Deployment slot
resource stagingSlot 'Microsoft.Web/sites/slots@2023-01-01' = {
  parent: webApp // parent reference creates the implicit dependency on the site
  name: 'staging'
  location: location
  kind: 'app'
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      linuxFxVersion: osType == 'Linux' ? runtimeStack : null
      alwaysOn: true
    }
    httpsOnly: true
  }
}

// Auto-scale settings
resource autoscaleSettings 'Microsoft.Insights/autoscalesettings@2022-10-01' = {
  name: 'autoscale-${webAppName}'
  location: location
  properties: {
    enabled: true
    targetResourceUri: appServicePlan.id
    profiles: [
      {
        name: 'Default'
        capacity: {
          minimum: '2'
          maximum: '10'
          default: '2'
        }
        rules: [
          {
            metricTrigger: {
              metricName: 'CpuPercentage'
              metricResourceUri: appServicePlan.id
              timeGrain: 'PT1M'
              statistic: 'Average'
              timeWindow: 'PT5M'
              timeAggregation: 'Average'
              operator: 'GreaterThan'
              threshold: 70
            }
            scaleAction: {
              direction: 'Increase'
              type: 'ChangeCount'
              value: '1'
              cooldown: 'PT5M'
            }
          }
          {
            metricTrigger: {
              metricName: 'CpuPercentage'
              metricResourceUri: appServicePlan.id
              timeGrain: 'PT1M'
              statistic: 'Average'
              timeWindow: 'PT5M'
              timeAggregation: 'Average'
              operator: 'LessThan'
              threshold: 30
            }
            scaleAction: {
              direction: 'Decrease'
              type: 'ChangeCount'
              value: '1'
              cooldown: 'PT5M'
            }
          }
        ]
      }
    ]
  }
}
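The two CPU rules above reduce to a threshold decision with a dead band between 30% and 70%; the gap is what prevents flapping (hysteresis). A minimal Python simulation of that logic:

```python
def autoscale_decision(avg_cpu, instances, minimum=2, maximum=10,
                       scale_out_at=70, scale_in_at=30):
    """Mirror a CPU-based rule pair: +1 instance above scale_out_at,
    -1 below scale_in_at, clamped to [minimum, maximum]. The dead band
    between the two thresholds keeps the instance count stable under
    moderate load instead of oscillating."""
    if avg_cpu > scale_out_at:
        return min(instances + 1, maximum)
    if avg_cpu < scale_in_at:
        return max(instances - 1, minimum)
    return instances

print(autoscale_decision(85, 2))   # 3 (scale out)
print(autoscale_decision(50, 3))   # 3 (inside the dead band: no change)
print(autoscale_decision(10, 2))   # 2 (already at the floor)
```

The cooldown in the real rules plays a similar anti-flapping role in the time dimension: after a scale action, new triggers are ignored for PT5M while the fleet stabilizes.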
Senior-Level Q&A

Q: How do you design a multi-tier application architecture that scales efficiently while managing costs and maintaining high availability?

A: Implement horizontal scaling with auto-scaling rules based on metrics (CPU, memory, queue depth, custom metrics), use load balancers to distribute traffic, design stateless application tiers to enable easy scaling, implement caching layers (Redis, CDN) to reduce backend load, use database connection pooling and read replicas, implement circuit breakers and health checks, design for graceful degradation, use deployment slots for zero-downtime deployments, implement geographic distribution for disaster recovery, and establish clear scaling policies with min/max boundaries. Balance between over-provisioning (cost) and under-provisioning (performance). Use reserved instances for predictable base load and on-demand scaling for peaks.

Q: What factors influence your decision between shared, dedicated, or isolated hosting models, and how do you optimize resource utilization?

A: Consider isolation requirements (compliance, security, performance), cost constraints, workload predictability, and operational complexity. Shared hosting suits development/testing with cost efficiency but limited control. Dedicated plans provide better performance isolation and control for production. Isolated plans offer maximum isolation for compliance-sensitive workloads. Optimize by right-sizing based on actual usage patterns, implementing auto-scaling, using reserved capacity for base load, consolidating workloads where appropriate, monitoring and alerting on resource utilization, implementing proper resource tagging for cost allocation, and regularly reviewing and adjusting based on metrics. Consider containerization for better resource utilization and portability.

Q: How do you implement a blue-green or canary deployment strategy for a high-traffic production application with zero downtime?

A: Use deployment slots (staging/production) for blue-green: deploy to staging, validate, swap slots atomically. For canary: route a percentage of traffic to new version, monitor metrics (error rates, latency, business KPIs), gradually increase traffic, rollback if issues detected. Implement health checks and readiness probes, use feature flags for gradual feature rollout, implement database migration strategies (backward-compatible changes, dual-write patterns), use traffic routing mechanisms (Application Gateway, load balancer rules), establish rollback procedures and automation, monitor comprehensive metrics during rollout, and have automated rollback triggers based on error thresholds. Test deployment procedures in non-production environments regularly.
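The canary flow described above — shift traffic in steps, roll back automatically when an error budget is breached — can be sketched as follows (step sizes and the 2% error budget are illustrative):

```python
def canary_step(step_percents, error_rates, max_error_rate=0.02):
    """Walk a canary rollout plan; return ('rolled_back', pct) at the first
    step whose observed error rate exceeds the budget, else ('promoted', 100)."""
    for pct, err in zip(step_percents, error_rates):
        if err > max_error_rate:
            return ("rolled_back", pct)
    return ("promoted", 100)

# Healthy rollout: 5% -> 25% -> 50% -> 100%
print(canary_step([5, 25, 50, 100], [0.001, 0.002, 0.001, 0.002]))  # ('promoted', 100)
# Error spike observed at the 25% step triggers the automated rollback
print(canary_step([5, 25, 50, 100], [0.001, 0.05, 0.0, 0.0]))       # ('rolled_back', 25)
```

A production implementation would also gate each step on latency and business KPIs, and hold each step long enough to gather a statistically meaningful sample.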

Best Practices
  • Use separate App Service Plans for production and non-production environments
  • Enable auto-scaling based on CPU, memory, or custom metrics
  • Use deployment slots for staging and blue-green deployments
  • Configure Always On for continuous web apps to prevent cold starts
  • Monitor and optimize resource utilization to right-size plans
  • Use reserved instances for predictable workloads to reduce costs
  • Implement health checks and configure backup strategies

Azure Key Vault

Detailed Overview

Azure Key Vault is a cloud service for securely storing and accessing secrets, keys, and certificates. It provides centralized secrets management, access logging, and integration with Azure services. Key Vault helps protect cryptographic keys and secrets used by cloud applications and services.

Azure CLI Commands
# Create Key Vault (soft delete is enabled by default on new vaults)
az keyvault create \
  --name myKeyVault \
  --resource-group myResourceGroup \
  --location eastus \
  --enable-purge-protection true

# Store a secret
az keyvault secret set \
  --vault-name myKeyVault \
  --name "DatabasePassword" \
  --value "MySecurePassword123!"

# Retrieve a secret
az keyvault secret show \
  --vault-name myKeyVault \
  --name "DatabasePassword" \
  --query value \
  --output tsv

# Grant access using managed identity
az keyvault set-policy \
  --name myKeyVault \
  --object-id <managed-identity-object-id> \
  --secret-permissions get list

# Enable diagnostic settings
az monitor diagnostic-settings create \
  --name keyvault-diagnostics \
  --resource /subscriptions/{sub-id}/resourceGroups/myResourceGroup/providers/Microsoft.KeyVault/vaults/myKeyVault \
  --logs '[{"category":"AuditEvent","enabled":true}]' \
  --workspace <log-analytics-workspace-id>
Bicep Code Snippet
@description('Name of the Key Vault')
param keyVaultName string

@description('Location for resources')
param location string = resourceGroup().location

@description('Object ID of the managed identity that needs access')
param managedIdentityObjectId string

@description('Enable soft delete')
param enableSoftDelete bool = true

@description('Enable purge protection')
param enablePurgeProtection bool = true

// Key Vault
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: keyVaultName
  location: location
  properties: {
    tenantId: subscription().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
    enabledForDeployment: false
    enabledForTemplateDeployment: true
    enabledForDiskEncryption: false
    enableSoftDelete: enableSoftDelete
    enablePurgeProtection: enablePurgeProtection
    enableRbacAuthorization: true // in RBAC mode, accessPolicies are ignored
    networkAcls: {
      defaultAction: 'Allow' // tighten to 'Deny' plus IP/VNet rules in production
      bypass: 'AzureServices'
    }
  }
}

// RBAC role assignment for managed identity
resource keyVaultSecretsUser 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(keyVault.id, managedIdentityObjectId, 'Key Vault Secrets User')
  scope: keyVault
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6') // Key Vault Secrets User
    principalId: managedIdentityObjectId
    principalType: 'ServicePrincipal'
  }
}

// Secret example
resource databasePasswordSecret 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = {
  parent: keyVault
  name: 'DatabasePassword'
  properties: {
    value: 'MySecurePassword123!' // In production, use secure parameters
    attributes: {
      enabled: true
    }
  }
}

// Diagnostic settings for audit logging
resource diagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'keyvault-diagnostics'
  scope: keyVault
  properties: {
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
        retentionPolicy: {
          enabled: true
          days: 90
        }
      }
    ]
    workspaceId: '/subscriptions/{sub-id}/resourcegroups/{rg}/providers/microsoft.operationalinsights/workspaces/{workspace-name}'
  }
}

// Network access: Private endpoint (optional for enhanced security)
@description('Enable private endpoint')
param enablePrivateEndpoint bool = false

@description('Subnet ID for private endpoint')
param subnetId string = ''

resource privateEndpoint 'Microsoft.Network/privateEndpoints@2023-05-01' = if (enablePrivateEndpoint) {
  name: 'pe-${keyVaultName}'
  location: location
  properties: {
    subnet: {
      id: subnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'pls-${keyVaultName}'
        properties: {
          privateLinkServiceId: keyVault.id
          groupIds: ['vault']
        }
      }
    ]
  }
}
Senior-Level Q&A

Q: How do you design a secrets management strategy for a large enterprise with hundreds of applications, multiple environments, and strict compliance requirements?

A: Implement a centralized secrets management service with environment-specific vaults, use managed identities for all service-to-service authentication to eliminate credential storage, implement RBAC with least-privilege principles and regular access reviews, enable comprehensive audit logging and integrate with SIEM systems, implement automated secret rotation with zero-downtime strategies, use reference-based configuration (Key Vault references) rather than embedding secrets, establish clear secret lifecycle management (creation, rotation, expiration, revocation), implement network restrictions (private endpoints, VNet integration) for enhanced security, use separate vaults per environment with appropriate security controls, establish break-glass procedures for emergency access with approval workflows, and implement automated compliance scanning and reporting. Consider hybrid approaches for on-premises integration.

Q: What are the architectural patterns and trade-offs when implementing secret rotation in a distributed system without causing service disruptions?

A: Use dual-write patterns: write new secrets alongside old ones, update consumers gradually, then remove old secrets. Implement versioned secrets with backward compatibility, use health checks to validate new secrets before full cutover, implement circuit breakers to handle rotation failures gracefully, use message queues or event-driven patterns to notify consumers of secret updates, implement idempotent rotation logic, use blue-green or canary deployment strategies for secret updates, establish rollback procedures, and monitor comprehensive metrics during rotation. Trade-offs: dual-write increases complexity and potential exposure window, but provides zero-downtime; immediate rotation is simpler but risks service disruption. Consider secret expiration policies and automated rotation schedules based on security requirements.
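The dual-read side of this pattern — consumers prefer the newest secret version but fall back to the previous one until cutover completes — in a minimal Python sketch with a hypothetical vault state:

```python
def resolve_secret(versions, validate):
    """Dual-read during rotation: try versions newest-first and return the
    first one the downstream dependency still accepts."""
    for candidate in versions:  # newest first
        if validate(candidate):
            return candidate
    raise LookupError("no usable secret version")

# Hypothetical state mid-rotation: new key published, backend not yet updated.
versions = ["key-v2", "key-v1"]
accepted_by_backend = {"key-v1"}  # backend still only accepts the old key

print(resolve_secret(versions, lambda s: s in accepted_by_backend))  # key-v1

# After cutover the backend accepts the new key, and v1 can be retired.
accepted_by_backend.add("key-v2")
print(resolve_secret(versions, lambda s: s in accepted_by_backend))  # key-v2
```

The `validate` probe stands in for a cheap health check against the dependency (e.g. an authenticated ping); in Key Vault terms, `versions` maps to listing a secret's enabled versions newest-first.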

Q: How do you balance security, usability, and operational complexity when implementing secrets management across multiple cloud providers and on-premises systems?

A: Use abstraction layers (HashiCorp Vault, cloud-native services) to provide consistent APIs across environments, implement federated identity (OIDC, SAML) for cross-cloud authentication, use cloud-agnostic secret formats and standards, implement centralized policy enforcement and governance, use Infrastructure as Code (IaC) for consistent secret management patterns, establish clear ownership and responsibility models, implement comprehensive documentation and runbooks, use automation to reduce manual operations, balance between centralized (consistency, governance) and decentralized (autonomy, performance) approaches, and regularly review and optimize based on operational metrics. Consider hybrid cloud secret synchronization strategies and establish clear security boundaries and data residency requirements.

Best Practices
  • Enable soft delete and purge protection for production Key Vaults
  • Use managed identities for authentication instead of service principals
  • Implement least privilege access policies
  • Enable diagnostic logging for audit and compliance
  • Use Key Vault references in App Service instead of storing secrets directly
  • Implement secret rotation policies and automation
  • Use separate Key Vaults for different environments (dev, staging, prod)
  • Enable network access restrictions and private endpoints for enhanced security

Azure Pipelines

Detailed Overview

Azure Pipelines is a cloud service that automatically builds, tests, and deploys code to any target. It supports continuous integration (CI) and continuous delivery (CD) for any language, platform, and cloud. Pipelines can be defined using YAML or the classic editor.

Azure CLI Commands
# Create service connection for Azure
az devops service-endpoint azurerm create \
  --organization https://dev.azure.com/MyOrg \
  --project MyProject \
  --name MyAzureServiceConnection \
  --azure-rm-service-principal-id <sp-id> \
  --azure-rm-subscription-id <sub-id> \
  --azure-rm-subscription-name "My Subscription" \
  --azure-rm-tenant-id <tenant-id>

# Create pipeline from YAML
az pipelines create \
  --name MyPipeline \
  --project MyProject \
  --organization https://dev.azure.com/MyOrg \
  --repository MyRepo \
  --branch main \
  --yaml-path azure-pipelines.yml

# Create variable group
az pipelines variable-group create \
  --name MyVariableGroup \
  --project MyProject \
  --organization https://dev.azure.com/MyOrg \
  --variables key1=value1 key2=value2

# Environments (and their approval checks) are not exposed through the az CLI;
# create them in the Azure DevOps portal (Pipelines > Environments) or via the
# REST API (distributedtask/environments endpoint)
Bicep Code Snippet
// Note: Azure DevOps pipelines are typically defined in YAML
// Bicep is used to deploy the infrastructure that pipelines will manage
// Example: Deploying resources that CI/CD pipelines will use

@description('Resource group name')
param resourceGroupName string

@description('Location for resources')
param location string = resourceGroup().location

@description('Storage account for pipeline artifacts')
param storageAccountName string

@description('Key Vault for pipeline secrets')
param keyVaultName string

// Storage account for build artifacts
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

// Key Vault for pipeline secrets
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: keyVaultName
  location: location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: subscription().tenantId
    enableSoftDelete: true
    enablePurgeProtection: true
    enableRbacAuthorization: true
  }
}

// App Service for deployment target
resource appServicePlan 'Microsoft.Web/serverfarms@2023-01-01' = {
  name: 'plan-pipeline-demo'
  location: location
  sku: {
    name: 'S1'
    tier: 'Standard'
  }
}

resource webApp 'Microsoft.Web/sites@2023-01-01' = {
  name: 'webapp-pipeline-demo-${uniqueString(resourceGroup().id)}'
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      appSettings: [
        {
          // The Key Vault reference goes in the value; the name is the app setting key
          name: 'DatabasePassword'
          value: '@Microsoft.KeyVault(SecretUri=https://${keyVaultName}.vault.azure.net/secrets/DatabasePassword/)'
        }
      ]
    }
    httpsOnly: true
  }
  identity: {
    type: 'SystemAssigned'
  }
}

// Pipeline YAML example (stored in repository)
/*
# azure-pipelines.yml
trigger:
  branches:
    include:
    - main
    - develop

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: MyVariableGroup
- name: buildConfiguration
  value: 'Release'

stages:
- stage: Build
  displayName: 'Build and Test'
  jobs:
  - job: BuildJob
    displayName: 'Build Application'
    steps:
    - task: UseDotNet@2
      inputs:
        packageType: 'sdk'
        version: '6.x'
    
    - task: DotNetCoreCLI@2
      displayName: 'Restore'
      inputs:
        command: 'restore'
        projects: '**/*.csproj'
    
    - task: DotNetCoreCLI@2
      displayName: 'Build'
      inputs:
        command: 'build'
        projects: '**/*.csproj'
        arguments: '--configuration $(buildConfiguration)'
    
    - task: DotNetCoreCLI@2
      displayName: 'Test'
      inputs:
        command: 'test'
        projects: '**/*Tests.csproj'
        arguments: '--configuration $(buildConfiguration) --collect:"XPlat Code Coverage"'
    
    - task: PublishCodeCoverageResults@1
      inputs:
        codeCoverageTool: 'Cobertura'
        summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
    
    - task: PublishBuildArtifacts@1
      displayName: 'Publish Artifacts'
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: 'drop'

- stage: Deploy
  displayName: 'Deploy to Production'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeployJob
    displayName: 'Deploy Application'
    environment: 'production'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Deploy to Azure Web App'
            inputs:
              azureSubscription: 'MyAzureServiceConnection'
              appName: '$(webAppName)'
              package: '$(Pipeline.Workspace)/drop/**/*.zip'
              deploymentMethod: 'auto'
*/
Senior-Level Q&A

Q: How do you design a CI/CD pipeline architecture that supports multiple teams, hundreds of microservices, and different deployment strategies while maintaining consistency and governance?

A: Implement a pipeline template library with standardized stages (build, test, security scan, deploy) that teams can extend, use Infrastructure as Code (IaC) for consistent environment provisioning, establish clear branching strategies and GitFlow patterns, implement policy-as-code for governance (branch protection, required approvals, security scans), use environment promotion models (dev → staging → prod) with appropriate gates, implement feature flags for gradual rollouts, use service mesh or API gateways for traffic management, establish clear ownership and responsibility models, implement comprehensive observability (logging, metrics, tracing), and create self-service capabilities with guardrails. Balance standardization (consistency, security) with flexibility (team autonomy, innovation).

Q: What are the key considerations when implementing deployment strategies (blue-green, canary, rolling) for a distributed system with database migrations and external dependencies?

A: Database migrations require careful planning: use backward-compatible changes, implement dual-write patterns, use feature flags to control new code paths, and have rollback scripts ready. For blue-green: maintain parallel environments, use database replication or shared databases with versioning, implement health checks before traffic switch, and have automated rollback procedures. For canary: route percentage of traffic, monitor error rates and business metrics, implement gradual rollout with automated rollback triggers, and use service mesh for traffic splitting. Consider external dependency versioning, API contract compatibility, cache invalidation strategies, and message queue compatibility. Always test deployment procedures in non-production environments and have comprehensive monitoring during rollout.

Q: How do you balance speed of delivery, quality, and security in a CI/CD pipeline, and what metrics do you use to measure and improve the process?

A: Implement shift-left practices: run tests, security scans, and quality checks early in the pipeline. Use parallel execution for independent stages, implement pipeline caching to reduce build times, use quality gates (code coverage thresholds, security scan results, performance benchmarks) that can be adjusted based on risk, implement automated testing at multiple levels (unit, integration, e2e), use feature flags to decouple deployment from release, and establish clear policies for when to bypass gates (with approval). Key metrics: deployment frequency, lead time (commit to production), mean time to recovery (MTTR), change failure rate, and security vulnerability remediation time. Use these metrics to identify bottlenecks, optimize pipeline performance, and balance between speed and quality. Implement continuous improvement practices with regular retrospectives.
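Quality gates of the kind described above are threshold checks with a direction; a small Python sketch of a gate evaluator (gate names and budgets are illustrative):

```python
def evaluate_gates(results, gates):
    """Return the names of failed gates; an empty list means the build may promote.
    Each gate is (threshold, higher_is_better)."""
    failed = []
    for name, (threshold, higher_is_better) in gates.items():
        value = results[name]
        ok = value >= threshold if higher_is_better else value <= threshold
        if not ok:
            failed.append(name)
    return failed

gates = {
    "coverage_pct": (80, True),     # at least 80% line coverage
    "critical_vulns": (0, False),   # no critical security findings allowed
    "p95_latency_ms": (300, False), # performance benchmark budget
}
print(evaluate_gates({"coverage_pct": 85, "critical_vulns": 0, "p95_latency_ms": 250}, gates))
print(evaluate_gates({"coverage_pct": 72, "critical_vulns": 1, "p95_latency_ms": 250}, gates))
```

Keeping the budgets in data rather than scattered through pipeline YAML makes the "adjust gates based on risk" policy mentioned above a one-line configuration change.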

Best Practices
  • Use YAML pipelines stored in source control for versioning
  • Implement pipeline templates for reusability across projects
  • Use variable groups and Key Vault for secrets management
  • Enable branch policies and PR validations
  • Implement proper artifact retention policies
  • Use deployment slots for zero-downtime deployments
  • Configure approval gates for production environments
  • Implement comprehensive testing stages (unit, integration, e2e)
  • Use pipeline caching to improve build performance