A short post which might be of use to some, as it took me a while to figure it out.
I’ve been making a few changes to this site lately, one of which was to move from having the images remotely hosted in AWS S3 to having them locally in the repo. This was prompted by the availability of the Hugo page bundles feature, which I think was introduced several years ago without me noticing.
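For reference, a Hugo “leaf bundle” is simply a directory containing an index.md with its resources alongside it, which is what allows the images to live in the repo right next to the post that uses them. The layout looks something like this (the file names are illustrative):

```
content/
  post/
    my-post/
      index.md       <-- the post itself
      diagram.png    <-- referenced from index.md as ![A diagram](diagram.png)
```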
For users migrating from the “Classic” VSTS/Azure DevOps release experience, it is not entirely obvious how to set up what used to be known as Pre-deployment approvals as part of a multi-stage YAML pipeline.
Pre-deployment approvals in a classic release pipeline
The documentation about this is rather unclear, not least because it mixes together concepts from the “Classic” Release Management experience with concepts from the multi-stage YAML experience.
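The short version, as a sketch: in the multi-stage YAML world the approval isn’t declared in the YAML at all - it is attached to an *environment* (under Environments > Approvals and checks in the web UI), and the pipeline simply targets that environment from a deployment job. Something like the following, where the stage and environment names are assumptions for the example:

```yaml
stages:
- stage: Deploy_Production
  jobs:
  - deployment: Deploy
    # Pre-deployment approvals are configured on this environment
    # in the web UI, not anywhere in the YAML itself.
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying..."
```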
In the context of Azure Network Security Groups, it’s often useful to be able to specify security rules that only apply in certain environments. For example, we might have some kind of load testing tool that should only be permitted to connect to our testing environment, or we might want to restrict our public facing load balancer so that it is only able to connect to our production environment.
I’ve long been of the opinion that, when faced with complicated code of uncertain semantics - and ARM Templates for networking certainly tick both of those boxes - a good way to understand the behaviour of the code is to write tests.
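By way of illustration, here is the shape such a test might take in Python - the template fragment, resource names, and assertions are all assumptions made for the sake of the sketch, and the post itself may well use a different language or framework:

```python
import json

# A minimal, assumed ARM template fragment containing one NSG rule.
template = json.loads("""
{
  "resources": [
    {
      "type": "Microsoft.Network/networkSecurityGroups",
      "name": "nsg-test",
      "properties": {
        "securityRules": [
          {
            "name": "AllowLoadTester",
            "properties": {
              "access": "Allow",
              "direction": "Inbound",
              "destinationPortRange": "443"
            }
          }
        ]
      }
    }
  ]
}
""")


def find_rule(template, nsg_name, rule_name):
    """Return the named security rule from the named NSG, or None."""
    for resource in template["resources"]:
        if (resource["type"] == "Microsoft.Network/networkSecurityGroups"
                and resource["name"] == nsg_name):
            for rule in resource["properties"]["securityRules"]:
                if rule["name"] == rule_name:
                    return rule
    return None


# The "tests": pin down what we believe the template says.
rule = find_rule(template, "nsg-test", "AllowLoadTester")
assert rule is not None
assert rule["properties"]["access"] == "Allow"
```

Tests like these don’t exercise Azure itself, but they do force you to state - and then verify - your assumptions about what the template actually declares.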
Over the last decade or so, much has been made of the need to “bridge the chasm”1 between software development teams and IT operations teams. This promises a number of technological and organisational benefits, such as faster delivery cycles, fewer defects, reduced time to market, and greater profitability.
For those companies “born in the cloud”, who have never deployed anywhere other than Firebase and think that Helm is how you steer the yacht you bought with the Series C funding, what follows will be largely unfamiliar.
Prompted by some discussion on the SQL Community Slack, I thought I’d revisit this old post on the SSDT Team Blog which outlines how to filter specific objects from a dacpac deployment using the Schema Compare API.
In the past, I’ve used Ed Elliott’s filtering deployment contributor for this kind of thing, but in the interest of experimentation I thought I’d have a look at what comes “in the box”, not least because deployment contributors can, ironically, be a bit of a pain to deploy.
Or rather, not so much “non-coding” as “never-coded”.
I came across this phenomenon during a recent brush with “Enterprise Agile”.
In particular, the notion of “Agile”, or more specifically “Scrum” as a skill distinct from software development, was an entirely new one to me.
This notion has given rise to individuals, and indeed teams of individuals, who are entirely conversant - and expensively trained, by “boutique” consultancies - in the terminology and rituals of Scrum and its “Enterprise” cousins: stand-ups, grooming, planning poker, retrospectives, release trains, the all-important “velocity” - the list goes on and on.
It may have been a while coming, at least compared to Jenkins Pipeline, Travis-CI, and friends, but VSTS now offers the facility to specify your build pipeline as YAML, meaning it can be version controlled with your application code. YAML Release Management Pipelines are “on the way”, but not yet publicly available.
YAML Build Definitions are currently in public preview, so you’ll need to ensure you have the feature enabled for your account.
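For orientation, a minimal definition might look something like the following - the queue name is an assumption, and at the time of writing the definition lives in a .vsts-ci.yml file in the root of the repository:

```yaml
# .vsts-ci.yml - a minimal YAML build definition
queue: Hosted VS2017
steps:
- script: echo "Hello from a YAML build"
```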
We’re often told that the existence of a DevOps Team is something of an antipattern, or indeed “considered harmful”, but it wasn’t until I saw this in action that some of the reasons for this advice became really clear in my mind, and I thought I’d note some of them down here.
In the “bad old days”, the developers used to write code and “throw it over the fence” to the operations team, who were the custodians of the organisation’s infrastructure and responsible for all software deployment and maintenance.
A large number of words have already been written about why shipping software in smaller increments, more quickly, is a good thing to do. By deploying more frequently we become more practised and our automation becomes better and less error-prone; we can “fail faster” by discovering we haven’t shipped the features our customers want and, by extension, succeed faster by shipping the features our customers do want, faster and more reliably than our competitors across the street.
Config as environment variables

I’m a big fan of the Twelve-Factor App “methodology”1 for building and deploying applications, and whilst much of it is geared towards web apps in Heroku-esque environments, I think the principles - or “factors” - are well worth bearing in mind when considering the delivery of other types of application.
Factor 3 of the 12 reads as follows:
An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc).
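In practice, reading config from the environment can be as small as this Python sketch - the variable names are made up for the example, not taken from any particular app:

```python
import os


def load_config(environ=os.environ):
    """Assemble application config from environment variables.

    DATABASE_URL is required and fails loudly (KeyError) if missing;
    LOG_LEVEL is optional, with a sensible default.
    (The variable names here are illustrative assumptions.)
    """
    return {
        "database_url": environ["DATABASE_URL"],
        "log_level": environ.get("LOG_LEVEL", "INFO"),
    }


# Example: a deploy that only sets the required variable
config = load_config({"DATABASE_URL": "postgres://localhost/app"})
assert config["log_level"] == "INFO"
```

The appeal is that the same artefact runs unchanged in every environment; only the environment variables - staging, production, a developer laptop - vary between deploys.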