There are two fundamental approaches to designing automation processes: declarative (often also known as model-based) and imperative (often also known as workflow-based or procedural).
The purpose of this article is to explain the principles of each, look a little at how they work, highlight some of their strengths and weaknesses, and explore how they can be used together. I don’t think one size fits all: based on your requirements and some of what is explored below, you may use a combination of the two, even if some of the tooling you use leans one way or the other.
Declarative/Model-Based Automation
The foundation of the declarative approach is desired state. The principle is that we declare, using a model, what a system should look like. The model is “data driven,” which allows data, in the form of attributes or variables, to be injected into the model at runtime to bring the system to a desired state. A declarative process should not require the user to be aware of the current state; it will usually bring the system to the required state using a property known as idempotence.
Idempotence
In practice, this means that if you deploy version 10 of a component to a development environment that is currently at version 8, changes 9 and 10 will be applied. If you deploy the same release to a test environment where version 5 is installed, changes 6 through 10 will be applied, and if you deploy it to a production system where it has never been deployed, changes 1 through 10 will be applied. Each deployment brings the component to the same state regardless of where the target started; the user, therefore, does not need to be aware of the current state.
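To make this concrete, here is a minimal, runnable Python sketch of an idempotent deployment. The targets and version numbers are hypothetical, and a real tool would query the target system or an inventory rather than an in-memory dictionary:

```python
# Current version per target; 0 means the component has never been deployed.
INSTALLED = {"dev": 8, "test": 5, "prod": 0}

def deploy(target: str, desired_version: int) -> None:
    current = INSTALLED[target]
    # Apply only the gap between the current and desired state; re-running
    # the same deployment applies nothing, which is what idempotence means.
    for change in range(current + 1, desired_version + 1):
        print(f"{target}: applying change {change}")
    INSTALLED[target] = max(current, desired_version)

deploy("dev", 10)   # applies changes 9 and 10
deploy("test", 10)  # applies changes 6 through 10
deploy("prod", 10)  # applies changes 1 through 10
deploy("dev", 10)   # applies nothing; already at the desired state
```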
Maintaining State
Once we have a model for our infrastructure or applications, any future changes required are made to the model. This means that a model changes over time and may have a current, past or future state. This raises the question of how a model can be applied to a target system: how does it know what changes to make? This is not always easy to answer, and the answer is not the same for every platform and runtime you are managing.
State Management
It’s hard to cover a broad subject like this without making some generalizations, and there are some in this section. There are broadly three methods used to keep an environment or system in line with its desired-state models:
1. Maintain an inventory of what has been deployed. This is where we record the state of what has been deployed, compare it against the desired state of a release, and apply only the difference.
For example:
- My release or ‘desired state’ contains database updates: 1.sql, 2.sql, 3.sql, 4.sql, 5.sql
- My example test system has already had 1.sql, 2.sql and 3.sql deployed (recorded in my inventory)
- Automation determines that only 4.sql and 5.sql need to run
Use Case: This type of state management is particularly useful for things such as databases, or for systems that require updates through a proprietary API, where the state of the target cannot always be easily determined.
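As a rough illustration of this inventory pattern, here is a Python sketch of the example above. The inventory would normally live in a database table or a file; an in-memory set stands in for it here:

```python
DESIRED_STATE = ["1.sql", "2.sql", "3.sql", "4.sql", "5.sql"]
inventory = {"test": {"1.sql", "2.sql", "3.sql"}}  # what has already been deployed

def deploy(target: str) -> None:
    applied = inventory.setdefault(target, set())
    # Only the difference between the release and the inventory is applied.
    for script in DESIRED_STATE:
        if script not in applied:
            print(f"{target}: running {script}")
            applied.add(script)  # record it so it is never run twice

deploy("test")  # runs 4.sql and 5.sql only
deploy("test")  # runs nothing; the inventory already matches the release
```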
2. Validate/compare the desired state with what has been deployed. As a simple illustration, we may want to determine whether important_file.xml from my desired-state deployment exists on the target system and whether it looks the same as the one in my package.
- If it doesn’t exist – create it / copy it, etc.
- If it does, determine whether it is different
- If it is different, how do I update it (converge, copy over, etc.)?
Use Case: Probably the most widely used implementation is simply managing the files that many cloud-native runtimes use to store configuration definitions, such as SSH keys for a server or XML files that manage a Java runtime such as Tomcat. This can also be used to maintain runtimes that are managed via an API; for instance: What is the heap or configuration item for my WebSphere server? Is it what it should be? If it is, do nothing; if not, set it to the correct desired-state value.
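The following Python sketch shows one way the compare-and-converge pattern could look for a managed file. The paths are hypothetical, and a real tool would offer more converge strategies than a straight copy:

```python
import hashlib
import shutil
from pathlib import Path

def digest(path: Path) -> str:
    """Fingerprint a file so we can tell whether two copies differ."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def ensure_file(desired: Path, target: Path) -> None:
    if not target.exists():
        shutil.copy(desired, target)        # it doesn't exist: create/copy it
        print(f"created {target}")
    elif digest(desired) != digest(target):
        shutil.copy(desired, target)        # it differs: converge by copying over
        print(f"updated {target}")
    else:
        print(f"{target} already matches")  # already in the desired state: do nothing

# Hypothetical paths:
# ensure_file(Path("package/important_file.xml"), Path("/opt/app/important_file.xml"))
```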
3. Just make it so. There are occasions when we don’t really care about what currently exists. The most obvious example here is anything that is stateless, such as a container. There are probably very few examples of why you would want to run any automation within a container to change its state; you would just want to instantiate a new one that has the new configuration you require.
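As a rough sketch of this approach, the following assumes the Docker CLI is available and simply replaces a container rather than mutating it; the container and image names are hypothetical:

```python
import subprocess

def replace_container(name: str, image: str) -> None:
    # Remove the existing container if there is one; ignore failure if absent.
    subprocess.run(["docker", "rm", "-f", name], check=False)
    # Instantiate a fresh container that already has the desired configuration.
    subprocess.run(["docker", "run", "-d", "--name", name, image], check=True)

# replace_container("my-app", "my-app:2.0")  # hypothetical names
```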
Imperative/Procedural/Workflow-Based Automation
The foundation of what is often referred to as the workflow, procedural or imperative approach is that a series of actions is executed in a specific order to achieve an outcome. For an application deployment, this is where the process of “how the application needs to be deployed” is defined, and a series of steps in the workflow is executed to deploy the entire application.
A standard example might include:
- Some pre-install/validation steps
- Some install/update steps
- Finally, some validation to verify that what we have automated has worked as expected.
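A minimal Python sketch of such a workflow might look like the following; the step functions are hypothetical stand-ins for real automation:

```python
def pre_install_checks() -> None:
    print("validating prerequisites")

def install() -> None:
    print("installing/updating the application")

def post_install_checks() -> None:
    print("verifying the deployment worked as expected")

# The order of the list is the order of execution; each step may depend
# on the one before it, which is the essence of an imperative workflow.
WORKFLOW = [pre_install_checks, install, post_install_checks]

for step in WORKFLOW:
    step()
```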
The often-cited criticism of this approach is that we end up with lots of separate workflows that layer changes onto our infrastructure and applications, and the relationships between these procedures are not maintained. The user, therefore, needs to be aware of the current state of a target before they know which workflow to execute. That means the principle of desired state/idempotence is hard to maintain, and each of the workflows is tightly coupled to the applications.
The reality here is that the situation is not black and white. Puppet is an example of what is seen as a Declarative automation tool, while Chef is said to be Imperative. Do they both support the concepts of desired state and idempotence? The answer is, of course, yes. Is it possible to use a workflow tool to design a tightly coupled release that is not idempotent? The answer is, again, yes.
What are the Benefits of Workflows?
The benefit of using a workflow is that we are able to define relationships and dependencies for our units of automation, and orchestrate them together. In the example above, we can create a workflow to perform pre-install, installation and post-install steps, perhaps adding conditional steps if it’s in a particular environment such as production (disable a monitoring system, apply additional security configurations, etc.).
Procedural workflows also allow us to, for example, deploy components A and B on Server1, then deploy component C on Server2 and then continue to deploy D on Server1. This gives us much greater control in orchestrating multi-component releases.
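As an illustration, here is a Python sketch of that ordering, with an environment-conditional step of the kind mentioned above; the component and server names are hypothetical:

```python
def deploy_component(component: str, server: str) -> None:
    print(f"deploying component {component} on {server}")

def release_workflow(environment: str) -> None:
    if environment == "production":
        print("disabling monitoring for the deployment window")  # conditional step
    deploy_component("A", "Server1")
    deploy_component("B", "Server1")
    deploy_component("C", "Server2")  # C must finish before D continues on Server1
    deploy_component("D", "Server1")
    if environment == "production":
        print("re-enabling monitoring")

release_workflow("production")
```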
It is often suggested that the choice is between Imperative- and Declarative-Based Deployment. However, these two approaches are not mutually exclusive.
Workflow for Applications
One of the issues in the world of DevOps is the lack of a consistent lexicon or terminology. Just ask someone who works in the industry what their definition of an environment is or what the term application means to them. My guess is you will get differing answers.
To me, an application is made up of components, and a release generally takes the form of an application created from multiple versioned components. Now, depending on the organization you work in, you may or may not recognize the definition I have given; if everything in your world is a stateless, loosely coupled microservice on a cloud-native platform, you may well not. If you work in a more traditional enterprise organization, you will more likely recognize the requirement for multi-component release orchestration, which requires procedural or workflow processes to handle the coordination and dependencies between the different components.
Model for Components
When it comes to deploying a specific component, I think it’s hard to argue against the benefits of using a Declarative or model-based approach. The consistency and simplicity of idempotence, being able to deploy any version to any target system, whether it has never been deployed to before, whether we are introducing a new change to an existing environment, or even when we want to roll a unit of automation back, speak for themselves.
Imperative Orchestration & Declarative Automation
The future of IT Operations is doubtless based on things such as stateless microservices running in fungible containers, using Kubernetes watchers to trigger event-driven automation, much as the future of retail is for Amazon to deliver milk to my IoT-connected refrigerator using a drone.
That being said, most of us still live in the present and will need, for some time to come at least, to address some of the automation and orchestration challenges that exist in today’s IT landscape. With this in mind, a combination of Declarative (or model-driven) units of automation, coordinated by Imperative (or workflow-based) orchestration, can be used to address these challenges. An imperative/workflow-based orchestrator will also allow you to execute not only declarative automation, but also autonomous imperative units of automation should you need to.
My recommendation is that an application workflow defines the order and dependencies of components being deployed. The Declarative Model for a component determines what action needs to be taken to bring a target system into compliance with the model.
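To illustrate how the two fit together, here is a Python sketch in which an imperative workflow defines the order while each component is converged declaratively; all names and versions are hypothetical:

```python
DESIRED_STATE = {"database": 5, "app-server": 10}   # the model
current_state = {"database": 3, "app-server": 10}   # what is out there today

def converge(component: str) -> None:
    """Declarative unit: act only on the gap between current and desired state."""
    desired = DESIRED_STATE[component]
    current = current_state.get(component, 0)
    if current == desired:
        print(f"{component}: already at {desired}, nothing to do")
    else:
        print(f"{component}: converging {current} -> {desired}")
        current_state[component] = desired

def release_workflow() -> None:
    """Imperative orchestration: the order encodes the dependency."""
    converge("database")    # the database must be in shape first
    converge("app-server")  # then the application server

release_workflow()
```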
It is hard to overstate the importance of desired state: it has changed the way we look at system and application updates and has drastically improved the visibility, audit and control of IT systems. Any progressive organization looking to move to a continuous delivery/deployment model, or to implement things such as autoscaling or “deploy and destroy” for development or testing purposes, will benefit from the use of declarative/model-based automation.
However, a model-based approach does not answer all the difficult questions around orchestration. That, in my opinion (for the moment, at least), is best achieved using Imperative, Procedural or Workflow processes.