A Primer to Help You Automate to Innovate
In this final installment of our DevOps series, we explore the six areas of our DevOps model, providing guidance on the process standardization, tool selection, and performance indicators needed to drive effective execution control.
If you took action after reading our last article, Rise of the Robots — Robotic Process Automation, you’ve moved through the Planning phase of the DevOps journey: your current state is known, your prioritized roadmap is drafted and, most importantly, you’ve secured acceptance and are ready to dive into implementation.
1. Continuous Development
Before moving to specific technology solutions that automate how code is integrated and deployed to your environments, ensure that your current software development methods are sound.
Assuming that you mapped out your Product or Software Development Lifecycle effectively during the Planning phase, you should see how well your releases are planned, how effective your iteration commitments are, and what your baselines are for your key performance indicators. You may have even defined a simpler approach (Future State) to eliminate the waste identified during the exercise. If you didn’t, you will want to do that first since you’ll need a model that benefits from Continuous Services.
When it comes to the process for successful Continuous Delivery, ensure that you’re following the appropriate agile techniques for your product release and iteration plans. Driven by your Product Roadmap, you should have a healthy backlog of requirements for implementation, both unrefined (proposed) and refined (accepted). Your requirements are the backbone of continuous delivery. Without them, it’s tough to project what work to plan, and continuous delivery isn’t feasible if your view extends no further than the current iteration.
Once requirements are solid, it’s time for execution control, and that entails a well-configured Application Lifecycle Management (ALM) tool like Microsoft’s Team Foundation Server or HP’s Quality Center. These tools apply the Agile or CMMI template that best fits your process to organize the team’s tasks. Consider this your source of truth for implementation and measurement across the DevOps model. Used properly, it is your most powerful ally. Used incorrectly, however, it’s another nuisance tool that no one understands how to use.
Last, and probably the most important step, institute real-time performance metrics deployed as Visual Management and delivered through your ALM dashboards, showing team and individual contributions to each iteration. This is essential to delivering on commitments and will help you identify impediments affecting the team’s burn-down rate (work left to do vs. time) and velocity (rate of progress). Installing visual management next to every agile team is an effective way to celebrate success and promote teamwork, and it can also drive daily stand-up activities.
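Both indicators are simple ratios that can be derived directly from ALM iteration data. A minimal sketch in Python, assuming story points as the unit of work; the field names here are hypothetical, not any particular ALM tool's export format:

```python
from dataclasses import dataclass

@dataclass
class Iteration:
    """One completed iteration (sprint), as exported from an ALM tool."""
    committed_points: int   # points the team committed to at planning
    completed_points: int   # points actually accepted by iteration end

def velocity(history: list[Iteration]) -> float:
    """Average completed points per iteration: the team's rate of progress."""
    if not history:
        return 0.0
    return sum(it.completed_points for it in history) / len(history)

def burn_down(total_remaining_points: int, days_left: int) -> float:
    """Points that must burn per remaining day; a rising value flags an impediment."""
    if days_left <= 0:
        return float(total_remaining_points)
    return total_remaining_points / days_left

# Example: three past iterations inform the dashboard for the current one.
history = [Iteration(30, 28), Iteration(32, 30), Iteration(30, 26)]
print(velocity(history))   # 28.0 points per iteration
print(burn_down(84, 7))    # 12.0 points per day to clear the remaining work
```

Fed into a wall-mounted dashboard, these two numbers are often enough to make an at-risk commitment visible days before the iteration ends.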
2. Continuous Integration (CI)
Without the ability to continuously integrate developed code, there is no continuous delivery. If code is the blood of the development engine, CI is the intravenous system, providing a vessel to package, validate and deploy code to the staging environments for action.
Selecting the right CI server to centralize the build environment is fundamental to success. There are many options; some store their configuration in easily accessible files, so job creation can be scripted. Whether you’re working with Virtual Machines or containers, like Docker, we recommend Jenkins paired with tools like Puppet and Chef, which provide more advanced support for standing up the required server instances.
Instrumentation at the job level is key here: ensure the appropriate notifications are set up to roll out the next phase of work and avoid expensive wait times. We want to avoid the QA team waiting on the next build while the Deployment Engineer wraps up for the day. Gaps like this can lead to unpleasant after-hours troubleshooting for your staff and erode productivity, resulting in missed commitments and avoidable backlog burn-up. If your team is spread across different time zones, this is critical to nail down.
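One way to keep a finished build from sitting unnoticed across time zones is to route the notification based on the recipient's local working hours. A sketch using Python's standard zoneinfo module; the channel names and the 9-to-18 workday are illustrative assumptions, not a standard:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def notification_channel(build_utc: datetime, recipient_tz: str,
                         workday: tuple[int, int] = (9, 18)) -> str:
    """Pick an alert channel so a completed build triggers the next phase.

    Inside the recipient's working hours, page them directly; outside,
    queue the handoff for whoever is on shift instead of letting it wait.
    """
    local = build_utc.astimezone(ZoneInfo(recipient_tz))
    start, end = workday
    if start <= local.hour < end:
        return "direct-page"      # recipient is on shift: notify now
    return "handoff-queue"        # off hours: route to the follow-the-sun queue

# A build finishing at 22:00 UTC: evening in London, mid-afternoon in Denver.
build_done = datetime(2024, 3, 1, 22, 0, tzinfo=ZoneInfo("UTC"))
print(notification_channel(build_done, "Europe/London"))   # handoff-queue
print(notification_channel(build_done, "America/Denver"))  # direct-page
```

Wired into a CI server's post-build step, logic like this keeps the pipeline moving without paging engineers at midnight.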
3. Continuous Testing
Now that you have your process stabilized and your tools enabling execution control, it’s time to tackle testing.
You may already have some automation coverage, but the traditional approach to testing likely won’t keep pace with your new development cycle times. This leads to delayed releases or, worse, gaps in coverage and costly post-release defects.
When implementing a continuous testing solution, ensure that you have the right balance of coverage and speed to protect the product as it moves between deployment gates. This requires taking your automation beyond the QA execution layer.
Before considering full-scale test automation, confirm that your environments are consistent across the delivery architecture. A test suite that passes in QA but breaks in Production won’t help. Verify that the code being deployed will work across all staging environments through to Production.
Once your environment is set, look at existing manual test cases to determine if the right end-to-end coverage is in place. Think beyond regression. If you can automate 95 percent of the value-add test scripts, you can invest your people in exploratory testing and analysis, which provides better insight into release viability. Automation was never intended to replace your people; it augments what they do, magnifying efficiency and delivering impressive results. But get the standardization wrong and you’ll see a higher count of the same problems you faced before automation.
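Deciding which manual cases to automate first can be reduced to a simple triage. The sketch below ranks cases by an invented score (run frequency times stability) and automates the top share, leaving the rest for exploratory testers; the scoring rule and field names are assumptions for illustration, not an industry formula:

```python
def triage(cases: list[dict], automate_ratio: float = 0.95) -> tuple[list, list]:
    """Split test cases: automate the highest-scoring share, keep the rest manual.

    Score = runs_per_release * stability: frequently run, deterministic cases
    repay automation fastest; flaky or judgment-heavy cases stay with people.
    """
    ranked = sorted(cases,
                    key=lambda c: c["runs_per_release"] * c["stability"],
                    reverse=True)
    cutoff = round(len(ranked) * automate_ratio)
    return ranked[:cutoff], ranked[cutoff:]

cases = [
    {"name": "login-regression",  "runs_per_release": 40, "stability": 0.99},
    {"name": "checkout-flow",     "runs_per_release": 25, "stability": 0.95},
    {"name": "report-layout",     "runs_per_release": 3,  "stability": 0.60},
    {"name": "new-ux-heuristics", "runs_per_release": 1,  "stability": 0.20},
]
automated, manual = triage(cases, automate_ratio=0.75)
print([c["name"] for c in manual])  # ['new-ux-heuristics']
```

Whatever scoring rule you choose, the point is the same: automate the repeatable, high-yield checks and spend human attention where judgment matters.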
For tooling, check that automation runs are integrated into the ALM analytics engine. This keeps your “source of truth” clean and provides a consolidated approach to analytics across the lifecycle.
Finally, there are some powerful tools disrupting the industry, like Kobiton, which provides a low-code automation engine as well as multi-modality interfaces for testing in the cloud. The benefits of these tools go beyond cycle-time improvements and positively impact traditional Capital Expenditure, since testing against cloud-hosted devices lets you cut your consumer electronics spend each quarter. It’s worth the time to investigate.
4. Continuous Deployment
At this point, we’ve got a sound lifecycle, we are seamlessly building committed code into releases across staging environments, and we can obtain quick feedback on performance and accuracy with a continuous testing solution. So how does continuous deployment help?
Simply put, this is ‘THE value part’ – where all of your hard work designing, building, integrating, and validating code across staging environments is delivered to the customer. Until it ships, that work is inventory, and inventory is the single worst waste type in engineering.
The same technology used to build the CI solution now updates your live application in Production as well.
And, like every other phase of the DevOps model, instrumentation plays an important role. Yet, for Continuous Deployment, that role is critical since your customer can now see how the new features perform.
This is the point where each piece of the DevOps model plays in concert to validate what you released to the customer. Any failure discovered here should suitably abort the deployment and trigger a real-time intervention to determine the source of failure.
Of course, if you are maintaining a Continuous Testing model in parallel to the Product feature set, and environments are set appropriately, this should never fail. If it does, the root cause analysis should discover a failure upstream in the process. Most likely, the environments have not been properly maintained or automation is no longer in sync with the feature set.
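The abort-on-failure behavior described above can be sketched as a deployment gate. In this sketch, deploy, health_check, and rollback are hypothetical stand-ins for your real release scripts and probes; the example simulates a failed health check:

```python
from typing import Callable

def deploy_with_gate(deploy: Callable[[], None],
                     health_check: Callable[[], bool],
                     rollback: Callable[[], None]) -> str:
    """Deploy, verify, and abort cleanly: a failed check must never stay live."""
    deploy()
    if health_check():
        return "released"
    rollback()        # restore the previous known-good version
    return "aborted"  # and trigger real-time root-cause analysis upstream

# Simulated run: the health probe fails, so the gate rolls back.
events: list[str] = []
status = deploy_with_gate(
    deploy=lambda: events.append("deployed v2"),
    health_check=lambda: False,
    rollback=lambda: events.append("rolled back to v1"),
)
print(status, events)  # aborted ['deployed v2', 'rolled back to v1']
```

The gate itself stays trivial by design; if it ever fires, the interesting work is the upstream root-cause analysis, exactly as described above.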
5. Continuous Monitoring
If we have applied the right Systems Thinking to building out the DevOps components up to this point, it’s time to reap the rewards.
Even though Continuous Monitoring sits near the end of our DevOps model, ongoing monitoring is key as you build the key performance indicators for each area of the model. As the saying goes, “If it’s not measured, it’s not managed.”
Most of the delivery intelligence should be housed in the ALM, and you will have additional insights to gather from system logs tied to Security and Performance. Because most ALM tools focus on the execution layer of delivery and less on Continuous Improvement, it’s important to periodically review insights and create corrective and preventative actions that are assigned to functional leaders. This may require a Quality Management solution.
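The periodic review itself can be partly automated: compare each KPI to its threshold and open an action assigned to the responsible leader when it is breached. A sketch with invented KPI names, thresholds, and owner roles:

```python
def review_kpis(metrics: dict[str, float],
                thresholds: dict[str, float],
                owners: dict[str, str]) -> list[dict]:
    """Open a corrective action, assigned to a functional leader,
    for every KPI that exceeds its agreed threshold."""
    actions = []
    for kpi, value in metrics.items():
        limit = thresholds.get(kpi)
        if limit is not None and value > limit:
            actions.append({
                "kpi": kpi,
                "observed": value,
                "limit": limit,
                "owner": owners.get(kpi, "quality-lead"),  # default owner
            })
    return actions

metrics = {"escaped_defects": 7, "build_failure_rate": 0.02}
thresholds = {"escaped_defects": 5, "build_failure_rate": 0.05}
owners = {"escaped_defects": "qa-lead"}
print(review_kpis(metrics, thresholds, owners))  # one action, owned by qa-lead
```

Feeding the resulting actions into a Quality Management solution closes the loop between measurement and Continuous Improvement.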
6. Continuous Feedback
As you refine and add to the Product backlog, reach out frequently and openly to customers about their experiences. Plus, gain insights from the Production layer automation regarding any operational exceptions that need to be addressed. These inputs, together, can help balance needs and wants as you plan future releases.
An application delivered faster with higher quality does not guarantee that you will achieve the desired business outcomes — how the customer benefits from those features determines value. To stay relevant in the ever-evolving solution space, you’ll want real-time insights into their needs as their experience with your platform evolves.
Traditionally, organizations have sent questionnaires or hosted customer forums to collect such insights, but today there are integrated ways to do this, living within your product. We are partial to Qualtrics, but there are many user experience tools to choose from to keep your product Design Thinking in balance. Plus, with the flexibility you have created through this process, the team should have room for those projects that tend to be ignored – like innovation.
Learn more about DevOps Transformation and discuss these ideas with our team.