Deploy Windows Services to a Virtual Machine with Azure DevOps

Lately I have been building a socket server for one of my clients. It has taken the form of a Windows Service, since there is no Azure PaaS offering that allows you to accept raw TCP socket connections. The socket server is the endpoint an IoT device sends raw TCP packets to for processing.

As part of the project I’ve also been working on the automation side of things. This is an important aspect of the project, since I don’t want to waste time repeatedly building the project locally, zipping it, connecting to the virtual machine, copying the file across, unzipping it and updating the Windows Service. Not when machines are much better at repetitive tasks like this.

In this post I’m going to share how to deploy a Windows service from Azure DevOps to a virtual machine.

A tale of two pipelines

For this project, I decided to split the continuous integration (CI) and continuous deployment (CD) pipelines into two separate pipelines. The CI pipeline will be responsible for building the code, running tests and producing a build artifact for the CD pipeline to deploy.

Continuous Integration (CI) Pipeline

Since this post is about deploying to a virtual machine rather than how to set up and configure a CI pipeline, I’m going to gloss over the details, but for completeness’ sake here is a simplified version of my CI pipeline.

name: $(MajorMinor).$(Revision)

trigger:
  batch: true
  branches:
    include:
    - main

pr:
  autoCancel: true
  branches:
    include:
    - main

pool:
  vmImage: 'windows-latest'

stages:
- stage: BuildTestPack
  displayName: 'Build, test and pack artifacts'

  jobs:
  - job: Build

    steps:
    - checkout: self
      clean: true
      fetchDepth: 1
      persistCredentials: true
    
    - task: DotNetCoreCLI@2
      displayName: 'Restore NuGet packages'
      inputs:
        command: 'restore'
        projects: '**/*.csproj'
        feedsToUse: 'select'
        includeNuGetOrg: true
    
    - task: DotNetCoreCLI@2
      displayName: 'Run tests'
      inputs:
        command: 'test'
        projects: '$(Build.SourcesDirectory)\SocketServer\tests\**\*.csproj'
        arguments: '/p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=./MyCoverage/'
        testRunTitle: 'Tests'

    - task: PublishCodeCoverageResults@1
      displayName: 'Publish code coverage results'
      inputs:
        codeCoverageTool: Cobertura
        summaryFileLocation: '$(Build.SourcesDirectory)/**/MyCoverage/coverage.cobertura.xml'
        failIfCoverageEmpty: true

    - task: DotNetCoreCLI@2
      displayName: 'Publish SocketServiceHost'
      inputs:
        command: 'publish'
        publishWebProjects: false
        projects: '$(Build.SourcesDirectory)\SocketServer\src\SocketServiceHost\SocketServiceHost.csproj'
        arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
        zipAfterPublish: false
    
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'drop'
        publishLocation: 'Container'

Configuring virtual machines for deployments

The CD pipeline is the interesting part of this blog post, but before we get into that there is a bit of prep work that needs to be done. Since we are deploying to a virtual machine it will need the Azure DevOps agent running so that our DevOps Project can use it as a resource. This will then allow us to use it as a target in our pipeline.

  1. Go to the Environments page in Azure DevOps
    /deploying-windows-services-devops/1-select-environments.png
    List all Azure DevOps Environments
  2. Click New Environment (I’m going to call this one Test), select Virtual Machines under Resources, then click Next.
    /deploying-windows-services-devops/2-create-new-environment.png
    Create new Azure DevOps Environments
  3. With the virtual machine resource created, DevOps then generates a Registration script
    /deploying-windows-services-devops/3-registration-script.png
    Registration script for Azure DevOps Agent
  4. Copy this script to the clipboard, then connect to the virtual machine and run it in an administrative PowerShell console.
    /deploying-windows-services-devops/4-install-devops-agent.png
    Installing the Azure DevOps agent
  5. With the agent installed, we can return to the DevOps project and see the newly created environment - then repeat the above steps to also create the Production environment
    /deploying-windows-services-devops/5-environments.png
    Newly created Azure DevOps environments

Clicking into an environment shows the resources and deployments associated with that environment.

/deploying-windows-services-devops/6-test-environment-resources.png
Listing the resources in the Test environment

The list of deployments will currently be empty, since the environment has only just been created.

Continuous Deployment (CD) Pipeline

With the CI Pipeline in place and the virtual machine configured for the two environments - Test and Production - the next part is the CD Pipeline.

Since the CD pipeline is going to follow the multi-stage pipeline pattern, I’ve split my pipeline into multiple YAML files.

This is the main cd_pipeline.yaml file, which is the entry point for Azure DevOps:

resources:
  pipelines:
  - pipeline: ci_pipeline
    source: 'CI - My CI Pipeline'
    trigger: none

trigger: none

pool:
  vmImage: $(VM_IMAGE)

variables:
  - template: templates/variables.yml

stages:
  - template: templates/deploy.yml
    parameters:
      environmentName: ${{ variables.ENVIRONMENT_NAME_TEST }}
      socketServerPort: ${{ variables.SOCKET_SERVER_PORT_TEST }}
  - template: templates/deploy.yml
    parameters:
      environmentName: ${{ variables.ENVIRONMENT_NAME_PRODUCTION }}
      socketServerPort: ${{ variables.SOCKET_SERVER_PORT_PRODUCTION }}

This is the templates/variables.yml file:

variables:
  VM_IMAGE: windows-latest

  ENVIRONMENT_NAME_TEST: Test
  SOCKET_SERVER_PORT_TEST: 6667

  ENVIRONMENT_NAME_PRODUCTION: Production
  SOCKET_SERVER_PORT_PRODUCTION: 6668

There are plenty of other ways this CD pipeline could be written and the overall objective accomplished; this is the approach I’ve seen work and the one I’ve decided to adopt.

In my actual pipeline there are a few other variables which I haven’t included here; some are common between the two environments and others are environment-specific, but I will save that for another post.

The cd_pipeline.yaml file above starts off by defining a pipeline resource:

resources:
  pipelines:
  - pipeline: ci_pipeline
    source: 'CI - My CI Pipeline'
    trigger: none

This allows us to consume the artifacts generated and published by the CI pipeline in this pipeline. For more info see this Microsoft docs article.

The key thing to note from this snippet of YAML is that the source value - in this case CI - My CI Pipeline - is the name of the CI pipeline as it appears on the DevOps Pipelines page.

Next I specify trigger: none, since I only want this pipeline to run on demand, and then set the vmImage using the $(VM_IMAGE) variable.
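For reference, the corresponding snippet from cd_pipeline.yaml:

```yaml
trigger: none

pool:
  vmImage: $(VM_IMAGE)
```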

The following section references the variables.yml template (above), where all the variables for the pipeline are defined.

variables:
  - template: templates/variables.yml

The following section is where things start to get interesting:

stages:
  - template: templates/deploy.yml
    parameters:
      environmentName: ${{ variables.ENVIRONMENT_NAME_TEST }}
      socketServerPort: ${{ variables.SOCKET_SERVER_PORT_TEST }}
  - template: templates/deploy.yml
    parameters:
      environmentName: ${{ variables.ENVIRONMENT_NAME_PRODUCTION }}
      socketServerPort: ${{ variables.SOCKET_SERVER_PORT_PRODUCTION }}

Here is where the stages for the pipeline are defined; the only difference between the two stages is the parameter values being passed in. The advantage of following this kind of pattern is that the templates remain generic and can therefore be reused.

Pipeline templates

The first template is templates/deploy.yml. It takes in the two parameters environmentName and socketServerPort, which are subsequently passed through to the templates/deploy_socket_server.yml template.

parameters:
- name: environmentName
- name: socketServerPort

stages:
- stage: ${{ format('{0}_Deployment', parameters.environmentName) }}
  displayName: ${{ format('Deploy to {0}', parameters.environmentName) }}
  jobs:
  - template: deploy_socket_server.yml
    parameters:
      environmentName: ${{ parameters.environmentName }}
      socketServerPort: ${{ parameters.socketServerPort }}

The interesting thing to note here is that the stage and displayName are dynamically generated based on the passed-in environmentName parameter value. This is a trick I learnt from a fellow engineer.

The other thing to note is that deploy.yml has only one job, which is itself a reference to a template. It could have been inlined as a series of tasks, but I like encapsulating all the logic related to deploying the socket server in its own template.

The next template is templates/deploy_socket_server.yml

parameters:
- name: environmentName
- name: socketServerPort

jobs:
- deployment: ${{ format('SocketServer_{0}_Deployment', parameters.environmentName) }}
  displayName: ${{ format('SocketServer deployment to {0}', parameters.environmentName) }}
  environment:
    name: ${{ parameters.environmentName }}
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - task: PowerShell@2
          inputs:
            targetType: 'inline'
            script: |
              Set-PSDebug -Trace 1
              Write-Host "EnvironmentName = ${{ parameters.environmentName }}"
              Write-Host "##vso[task.setvariable variable=EnvironmentName]${{ parameters.environmentName }}"
              Write-Host "##vso[task.setvariable variable=SocketServerConfig.Port]${{ parameters.socketServerPort }}"              
        - task: FileTransform@1
          displayName: Update appsettings.json
          inputs:
            folderPath: '$(Agent.BuildDirectory)\ci_pipeline\drop\SocketServiceHost'
            targetFiles: '**/appsettings.json'
            fileType: json

        - task: PowerShell@2
          inputs:
            targetType: 'inline'
            script: |
              Set-PSDebug -Trace 1
              Write-Host "EnvironmentName = ${{ parameters.environmentName }} $(Build.BuildNumber)"
              Write-Host "BuildNumber = $(Build.BuildNumber)"
              Copy-Item -Path "$(Agent.BuildDirectory)\ci_pipeline\drop\SocketServiceHost" -Destination "C:/AzDoDeployments/${{ parameters.environmentName }}/$(Build.BuildNumber)" -Recurse
              $serviceName = "SocketServer_${{ parameters.environmentName }}"

              Write-Host "Stopping current service $serviceName"
              sc.exe stop "$serviceName"
              Write-Host "Updating binPath to C:/AzDoDeployments/${{ parameters.environmentName }}/$(Build.BuildNumber)/SocketServiceHost.exe"
              sc.exe config "$serviceName" binPath="C:/AzDoDeployments/${{ parameters.environmentName }}/$(Build.BuildNumber)/SocketServiceHost.exe"
              Write-Host "Starting service $serviceName"
              sc.exe start "$serviceName"              

            errorActionPreference: 'silentlyContinue'

Once again this template takes in the two parameters environmentName and socketServerPort.

That is followed by the jobs section, where I’m using the special deployment job type. This is specifically recommended by Microsoft:

In YAML pipelines, we recommend that you put your deployment steps in a special type of job called a deployment job. A deployment job is a collection of steps that are run sequentially against the environment. A deployment job and a traditional job can exist in the same stage.

This job type provides a few benefits, like having explicit deployment strategies (runOnce, rolling and canary).

The environmentName parameter specifies which Environment to use; this needs to match the environments that were configured above.

The strategy being used here is the runOnce approach, since I only have a single virtual machine that I’m deploying the socket server to.
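If the socket server were ever rolled out to an environment containing several virtual machines, a rolling strategy could be used instead. A minimal sketch (the maxParallel value is illustrative, not from my pipeline):

```yaml
strategy:
  rolling:
    maxParallel: 1        # update one VM at a time so the others stay online
    deploy:
      steps:
      - script: echo "deployment steps run against each target VM in turn"
```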

Then follow the steps. Before they are run, the agent downloads the artifacts from the pipeline specified in the top-level cd_pipeline.yaml file.
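A deployment job downloads artifacts from referenced pipeline resources automatically, but the download can also be made explicit if you want more control. A sketch, using the ci_pipeline resource alias and the drop artifact from the CI pipeline above:

```yaml
steps:
- download: ci_pipeline   # the pipeline resource alias defined in cd_pipeline.yaml
  artifact: drop          # fetch only the 'drop' artifact published by the CI pipeline
```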

  1. PowerShell task - sets up the pipeline variables that will be used in the subsequent step, using the ##vso[task.setvariable variable=varName]varValue logging command syntax
  2. FileTransform task - a built-in task for variable substitution in configuration files. In this case it targets the appsettings.json file and updates any values whose names match a variable defined in the pipeline; nested JSON properties are matched using dot notation, such as SocketServerConfig.Port
  3. PowerShell task - copies the build to a versioned folder, then stops the Windows Service, updates its binPath to point at the new folder and starts it again

It took a while to get to it, but there it is - a Windows Service deployed to a virtual machine via Azure DevOps.

This is how our multi-stage pipeline run looks in the portal

/deploying-windows-services-devops/7-multi-stage-run.png
Successful deployments to Test and Production environments

And now on the Environments page we can see the latest release for an environment.

/deploying-windows-services-devops/8-environments-with-deployments.png
Azure DevOps Environments after deployments have run

Taking it further

There’s plenty more that could be done with this pipeline; here are a few ideas:

  • Approval gates between stages - this would mean an explicit approval would be required before deploying to the Production environment
  • A check to ensure that deployments to the Production environment can only occur from the main branch
  • A pipeline parameter to enable skipping over the deployment to Production
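The last idea could be implemented with a pipeline parameter in cd_pipeline.yaml and a compile-time condition around the Production stage. A sketch, assuming a boolean parameter named deployToProduction (not part of the pipeline shown above):

```yaml
parameters:
- name: deployToProduction
  displayName: 'Deploy to Production?'
  type: boolean
  default: true

stages:
  - template: templates/deploy.yml
    parameters:
      environmentName: ${{ variables.ENVIRONMENT_NAME_TEST }}
      socketServerPort: ${{ variables.SOCKET_SERVER_PORT_TEST }}
  # the Production stage is only inserted when the parameter is true
  - ${{ if eq(parameters.deployToProduction, true) }}:
    - template: templates/deploy.yml
      parameters:
        environmentName: ${{ variables.ENVIRONMENT_NAME_PRODUCTION }}
        socketServerPort: ${{ variables.SOCKET_SERVER_PORT_PRODUCTION }}
```

Unchecking the box when queuing a run would then deploy to Test only.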
