Infrastructure as code is an operations practice that lets system administrators and DevOps experts make their configurations reusable.
All kinds of management - whether it is infrastructure configuration management or any other type of administration - can be done in two different ways:

- Process-oriented: you describe each step that needs to be taken.
- Result-oriented: you describe the intended outcome and leave the individual steps to the executor.
All management experts will tell you that result-oriented tactics are easier to administer and more economical with resources. They are also more likely to get you to the actual results.
The same applies when working with technology. You can give detailed orders to a computer if you understand all the contexts in which those orders will be executed. But computers are usually much better than humans at computing and comparing multiple possible solutions across different contexts. This is why it often makes much more sense to give the computer a final goal and let it make the best decisions along the way.
When we speak about deploying infrastructure in the DevOps context, there is a time and place for both approaches.
Broadly, there are two different ways of deploying infrastructure:

- Setting it up manually through graphical or command line interfaces.
- Programming a computer to set it up for you - the infrastructure as code approach.
The manual approach can be faster when getting to know a new system, or when your main goal is to discover and experiment. Sometimes you don’t yet know how to describe your final destination. Then creating the setup by hand and adjusting the details on the go is the right way to proceed.
Graphical and command line interfaces used for manual setup are inherently process-oriented: you describe each task and step to the computer as you proceed, so the result-oriented approach is not possible here.
The alternative is programming a computer to set up the infrastructure for you. The infrastructure as code approach is usually best for recurring tasks that you need to do more than once, and for everything that requires regular maintenance in the future. Having a piece of code execute a perfect setup is much faster than manually entering all the data every single time. It also avoids the errors that easily happen with manual data input.
Deploying infrastructure as code also makes it possible to describe the setup in a result-oriented way. Also known as “infrastructure as data”, this is the declarative way of describing the intended state of the infrastructure. We tell the computer exactly what the result needs to look like, and let it make the appropriate decisions about the right path to reach that state.
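To make this concrete, here is a toy sketch in Python of what “infrastructure as data” means: the intended state is plain data rather than a sequence of commands, and a small function works out what would have to change to reach it. All server names, sizes and fields here are invented for illustration - real tools use their own schemas.

```python
# The intended state, expressed as data (names and fields are made up).
desired_state = {
    "servers": {
        "web-1": {"size": "small", "region": "eu-west"},
        "web-2": {"size": "small", "region": "eu-west"},
        "db-1": {"size": "large", "region": "eu-west"},
    }
}

# What actually exists right now.
current_state = {
    "servers": {
        "web-1": {"size": "small", "region": "eu-west"},
        "db-1": {"size": "medium", "region": "eu-west"},
    }
}

def diff(desired, current):
    """Compute which servers must be created, updated or deleted
    to take the current state to the desired state."""
    actions = []
    for name, spec in desired["servers"].items():
        if name not in current["servers"]:
            actions.append(("create", name))
        elif current["servers"][name] != spec:
            actions.append(("update", name))
    for name in current["servers"]:
        if name not in desired["servers"]:
            actions.append(("delete", name))
    return actions

print(diff(desired_state, current_state))
# → [('create', 'web-2'), ('update', 'db-1')]
```

The point is that we never wrote down *how* to create or resize a server; we only described the goal, and the tool derives the steps.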
Infrastructure as code (IaC) builds a strong foundation for DevOps quality management and continuous improvement.
Usually about 70% of all resources are spent on operations and maintenance, and only 30% on innovation and development. Moving the needle towards more innovation is an obvious way to strive towards better business results, but this requires a sharp focus on continuous improvement. Make sure that your operations are extremely effective, and you will be able to spare more resources for innovation.
Modern quality management practices are based on the Deming cycle: Plan, Do, Check, Act.
When your DevOps team deploys IaC, it is much easier to establish and strengthen the baseline standards that get set during the Deming cycle. When each cycle builds on the results of the previous one, it is much easier to compare the changes and measure progress.
[Image: the Deming cycle. Image credits: Clarity.]
Manually editing our infrastructure keeps the change workload constant at best, and creates additional work when mistakes and variations creep into the results.
Everything we already know and have learned should be delegated to a computer for optimisation and automation. This will produce consistent results that can become the standard before planning the next improvement cycle.
A stable and consistent environment lets you move quickly through further iterations. Being able to fulfil in-house infrastructure requests almost instantly greatly shortens the internal lead time.
IaC is an early investment that may take a few extra hours to set up, but will save you a lot of time and labour costs in the long run when scaling.
Whenever errors occur after making changes, you can see exactly what changed, so it’s easier to troubleshoot and fix. When rollbacks are simple, the team has less anxiety about making new changes. This helps keep them motivated to move fast while still preserving service quality.
Repetitive, consistent deployments without the chance of human error are less likely to cause security vulnerabilities. When all changes are auditable, you can verify in detail which version was live at any given moment, and when it was published.
IaC is self-documenting. Developers do not need to spend time writing notes or producing additional explanatory documents. This radically lowers the need to re-learn and figure out what was set up.
It also helps you optimise costs. When you always have a full overview of the infrastructure in use, it is easy to ensure you’re using exactly the resources you need at the best price point.
To use the result-oriented approach, you need to choose infrastructure components that allow declarative configurations. Not every hardware setup has a suitable API that actually lets you describe goals instead of processes.
For example, off-the-shelf hardware typically does not come with an API at all. This is a challenge when building your own infrastructure - you will need to figure out how to implement an API for your infrastructure configuration management.
Public cloud services like Amazon AWS, Google Cloud and Microsoft Azure are specifically designed for management over an API. Because of this, their native built-in API support is always better than a system where the API gets bolted on after unboxing the hardware on location.
This is why public cloud services support declarative configurations by default. Each provider acts as a central vendor whose services share a single universal API, giving you a uniform management experience. You also don’t need to buy additional licences like you would for physical devices in your data centre.
It can be surprisingly challenging to provide comparable options when setting up your own infrastructure. Without enough in-house skills and resources, the self-building route can eventually force you to give up and fall back to process-oriented micromanagement. While at a glance it may feel less expensive to own your infrastructure, the building process will absorb your most talented engineers, and provide mediocre results in terms of security and reliability compared to the public cloud.

3. Use orchestrating tools to enable IaC
While all public cloud services are designed for management over API to enable IaC, most of these APIs cannot support declarative configurations by themselves.
Here’s where orchestrating tools come into play. They implement the declarative configuration by understanding the current state of your infrastructure, and taking it to the intended state through commands sent over the API. This lets you radically simplify your codebase.
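The core of that job can be sketched in a few lines of Python. Here a plain dict stands in for the cloud API, and a `reconcile` function takes whatever exists towards the declared state; real orchestrators do the same thing against provider APIs, with error handling, retries and partial failures on top. All names and values are invented for the example.

```python
def reconcile(desired, cloud):
    """Take the 'cloud' (a dict standing in for real provider APIs)
    to the desired state, returning the names that were touched."""
    changed = []
    for name, spec in desired.items():
        if cloud.get(name) != spec:   # missing or different -> fix it
            cloud[name] = spec        # stands in for a create/update API call
            changed.append(name)
    for name in list(cloud):
        if name not in desired:       # exists but not declared -> remove it
            del cloud[name]
            changed.append(name)
    return changed

# Made-up starting point and goal:
cloud = {"web-1": {"size": "small"}, "old-db": {"size": "large"}}
desired = {"web-1": {"size": "small"}, "web-2": {"size": "small"}}

print(reconcile(desired, cloud))  # → ['web-2', 'old-db']
print(cloud == desired)           # → True
```

Run the function again and it returns an empty list: once reality matches the declaration, there is nothing left to do, which is exactly why declarative runs are safe to repeat.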
Without an orchestrating tool, you would need to develop this code from scratch, along with all of the error handling. Writing out the full logic for interpreting instructions is a lot of code, so it only makes sense to use a proven and validated environment where the bugs for the most common use cases have most likely already been fixed.
We always recommend consuming the orchestrating tools as a managed service from a cloud provider, instead of setting up from scratch. This lets you focus on the value-adding applications and use IaC practices that really improve the service quality for the end customers.
Currently the de facto standard for orchestrating tools is Kubernetes - an open-source system for automating deployment, scaling and management of containerised workloads and services.
While it is not the only way to embrace the practices of infrastructure as code, it is surely the most common way to get all of your technologies under control within a single API.
The scope of technologies that can use IaC is large and wide. From computing and networking resources to managed application instances, each of these may have its own management methods. To keep them all under your control, you either need to choose them in a way that fits under a common API, or find middleware that provides a common API for all of these technologies.
This is the main value of Kubernetes - it can provide a common resource model and API for a wide range of back-end and infrastructure components. Whatever your stack is, it is highly likely that Kubernetes already covers all of your existing technologies.
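As an illustration, a Kubernetes Deployment manifest declares the intended state - for example, “three replicas of this container should be running” - and Kubernetes continuously reconciles the cluster towards it. The names and container image below are placeholders chosen for the example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # placeholder name
spec:
  replicas: 3            # the intended state: three running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

Nowhere does the manifest say how to start a container or what to do when one crashes; if a replica dies, Kubernetes notices the drift from the declared state and replaces it.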
Keep in mind that Kubernetes is just a tool. Implementing Kubernetes should never become a goal itself. Always make sure that by adding another tool in your stack you are actually creating value, not just adding complexity. Each new tool needs time and effort for implementation, and lots of resources for adjustment.
If you do active software development, IaC and Kubernetes will surely speed up your work processes and lower the risk of mistakes. But they are a heavy investment in skills and knowledge. Before you start, make sure that upgrading your practices compensates for the initial investment in training and the mindset change.
If you think IaC could be useful for your organisation, do not hesitate to get in touch with Entigo to see how we can help! The infrastructure as code approach is at the core of all Entigo services. We help organisations with cloud migration and modernisation, but can also provide more targeted services for companies of different sizes and growth stages.