4th April 2018

Recently, the team and I finished another DevOps project, so we wanted to share with you the challenges we faced during our journey. Usually, a traditional DevOps engagement at MagenTys takes no longer than 6-8 weeks, depending on the complexity of the solutions and the DevOps practices that need to be enforced. The team's level of DevOps maturity is also crucial to finishing the project on time: DevOps doesn't consist of just implementing a CI/CD pipeline with some tools and displaying some green builds on a screen; it's also about the development team adopting the right patterns and practices.

When we are deciding on the right tools to use, it's important to get feedback from every team member and to check with other teams which tools they use and prefer, as we need to keep consistency across teams to standardise these practices. There are plenty of tools out there, both open source and paid. One of my favourite views of the main DevOps tools is the Periodic Table of DevOps from XebiaLabs. For this project, the client decided to use mainly Visual Studio Team Services plus some non-Microsoft tools to cover a range of areas, including:

  • Project/Portfolio Management
  • Source Code Management
  • Build Management
  • Release Management
  • Test Management
  • Package Management

Back to our project, we carried out the following:

1. DevOps Healthcheck

As time is limited, before our DevOps engagement we send out a pre-engagement questionnaire, in which the development team (usually the Dev Lead or Head of IT, though sometimes it's just the team sitting down for 20 minutes to fill it in) answers some questions about the patterns, practices and tools they use for Agile development and Agile delivery. The questions target the principal areas of DevOps, covering most of the main practices: Source Code Management, Build Management, CI/CD pipelines, Quality Gates, Team Collaboration, Configuration and Provisioning, Cloud and others.

Once we have this information, the next step is to assess the level of maturity in each area of DevOps and use that assessment to draft a plan and accelerate adoption.
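As a rough illustration only (this isn't our actual assessment tool, and the area names, 0-3 scale and threshold below are invented), the questionnaire answers can be rolled up into a maturity score per area so that the weakest areas drive the plan:

```python
# Illustrative sketch: turn questionnaire answers into per-area maturity scores.
# The areas, the 0-3 scale and the "focus" threshold are assumptions for the example.

# Each answer scores one DevOps area from 0 (absent) to 3 (fully adopted).
answers = {
    "Source Code Management": [3, 2, 3],   # e.g. repo choice, PRs, branching
    "Build Management":       [1, 2, 1],
    "CI/CD":                  [0, 1, 1],
    "Quality Gates":          [0, 0, 1],
}

def maturity(scores):
    """Average score for an area, on the same 0-3 scale."""
    return sum(scores) / len(scores)

# Draft the adoption plan around the weakest areas first.
for area, scores in sorted(answers.items(), key=lambda kv: maturity(kv[1])):
    level = maturity(scores)
    flag = "focus" if level < 1.5 else "ok"
    print(f"{area:<25} {level:.1f}/3  [{flag}]")
```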

2. Plans and Strategies: Definitions and Reviews

DevOps is not just about bringing the trendiest and shiniest tools to the table, tools that let teams orchestrate each piece of the software delivery process faster, fully automated and successfully. It's also about enabling best practices in each area of the SDLC, reviewing the current processes, and collaborating on how to optimise and improve them.

Source Code Management:

Is the team using the right source code repository given the nature of the project? Does the repo allow the team to operate according to their coding practices? Is the team practicing code reviews and pull requests? Does the team have a branching strategy that allows them to deliver continuously, is easy to automate, and is easy to manage?

There are plenty of possibilities nowadays; we have found teams using:

  • Git
  • GitHub
  • TFVC
  • Subversion
  • Perforce
  • Shared folders (!)

Some of these, like Git or GitHub, are very well-known to most development teams and broadly adopted, as they are easy to integrate into CI pipelines and build systems. Others, like Perforce, are not as straightforward to work with, but they have some advantages over Git, such as storing large binary files.

Another aspect of the SCM model is how to split (or not split) the different projects into one or multiple repos. We found that some teams prefer to have multiple repositories for the same product, with a separate repo per project or service, so that each can easily be shared with external teams, or simply because it has a different business model and its own life cycle. Endless discussions about one repo vs multiple repos, or one repo with multiple git submodules, always need to be supported by the business case: it is not only up to the developers how the source code is organised, as everyone in the team will have something to say about it.

One Repo vs Multiple Repos:

  • Developers, Software Development Engineers in Test and team members specialised in DevOps might prefer to have one repository only, as it's easier to:
    • access
    • branch out
    • generate build definitions
    • release between branches
    • solve dependencies, etc.

But some will disagree, saying they don't want too much noise in the same repo from having multiple projects in it, especially with several commits per day coming from different teams.

  • Business owners play a big part in this discussion, as they know how those projects are going to be released:
    • why?
    • where?
    • for how long?
    • as SaaS, PaaS or just a website?

If the product (or products) will be sold as a white-label offering, it comes with a different business model; sometimes the client wants to sell some of the IP, but not all of it. According to these business cases, the repo strategy will change.

Here are some interesting articles that can help you understand the differences between single and multiple repositories:

https://www.benday.com/2016/11/04/one-tfs-build-multiple-git-repositories-with-submodules/

http://www.drmaciver.com/2016/10/why-you-should-use-a-single-repository-for-all-your-companys-projects/

http://blog.shippable.com/our-journey-to-microservices-and-a-mono-repository

https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext

Another critical part of the plan is the branching strategy: what is the purpose of each branch and, most importantly, how are changes merged or delivered between branches?

The strategy we chose for this particular project was quite simple, as it is best to keep things simple from the beginning; there is always time to add complexity later:

(Image: the simple branching strategy we followed)

Some interesting links that can help you decide which branching strategy could be adequate for you:

https://guides.github.com/introduction/flow/
https://docs.microsoft.com/en-us/vsts/tfvc/branch-strategically
https://docs.microsoft.com/en-us/vsts/articles/effective-tfvc-branching-strategies-for-devops
https://docs.microsoft.com/en-us/vsts/git/concepts/git-branching-guidance
https://docs.microsoft.com/en-us/vsts/tfvc/branching-strategies-with-tfvc

Project/Portfolio Management

What project management tool is the team currently using? Does it have full end-to-end traceability, from the story's conception and the code written for it to the deployment to production systems? Are any tests attached to these stories? Does the team have full visibility of the work done in each area through this tool? Is the chosen tool able to properly implement the process and practices the team follows (Agile, Scrum, XP, Kanban, …)?

Well, most of the teams lacked the ability to track the lifecycle of a story across the different development areas. We found most had a preference for Atlassian's Jira, while others favoured TFS/VSTS or Target Process. The important thing, though, is not how these tools manage those stories (more or less all of them let you do the same things, just with a different look and feel), but how they track the work done to complete them.

For example, Atlassian's Jira is capable of more than just creating issues on nice Kanban boards: it can fully integrate with GitHub so you can check the work done by the dev team, and it has plugins that bring your test cases into those stories, or even link them to the release pipelines of specific external tools.

In our case, we used Visual Studio Team Services as the main tool for project management. We migrated all the epics, stories and tasks from their legacy system (Target Process) to VSTS; a sketch of what one step of such a migration can look like is below.
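This is only an illustration, not the actual migration tooling: it shows how a single work item can be created through the VSTS Work Item Tracking REST API (api-version 4.1, current at the time of writing). The account name, project name and token are placeholders, and the Target Process export side is omitted:

```python
# Illustrative sketch: create one migrated work item in VSTS via its REST API.
# Account, project and token are hypothetical placeholders.
from urllib.parse import quote

import requests

VSTS_ACCOUNT = "myaccount"          # hypothetical VSTS account
PROJECT = "MyProject"               # hypothetical team project
PAT = "<personal-access-token>"

def create_work_item(work_item_type, title, description):
    """Create a work item ('Epic', 'User Story', 'Task', ...) and return its id."""
    url = (f"https://{VSTS_ACCOUNT}.visualstudio.com/{PROJECT}"
           f"/_apis/wit/workitems/${quote(work_item_type)}?api-version=4.1")
    # The Work Item Tracking API takes a JSON Patch document of field operations.
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
    ]
    resp = requests.post(
        url,
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),  # a PAT is sent as the password with an empty username
    )
    resp.raise_for_status()
    return resp.json()["id"]

# e.g. replaying items exported from the legacy tracker:
# for item in exported_items:
#     create_work_item(item["type"], item["title"], item["description"])
```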

(Images: project backlog and project dashboard in VSTS, taken from a local demo, not a real environment)

Build Management

What do we have to build? How often? Can we automate it? Which technologies? What is the output?

We need to define a plan around how we are going to build our software (the input) and what the outcome of that will be. Factors to take into consideration are:

  • Build definitions: What do we want to build? Do we want to run tests as part of the build, and which ones? Will static code analysis be part of it? Are we going to produce artefacts that will be deployed later as part of a CD pipeline?
  • Project types to build: web, services, apps, databases, scripts, mobile, etc.
  • Frequency: How often do we have to run those builds? On demand? Do we have nightly builds? Are we implementing Continuous Integration at every branch level?
  • Outcome: What do we want the build definition to produce? Are we creating packages, binaries or other kinds of artefacts? Do we want to run a set of tests and analyse the results? Are we storing the packages in an artefact library or a shared folder?
  • Artefacts: How are we versioning them? Are we creating packages? How do we establish the quality of these artefacts?
  • Build tool: Jenkins, Bamboo, TeamCity, TFS/VSTS, others?
  • Build/Test/Deploy agents: Can we host these agents locally, or will they be deployed in the cloud? How many do we need? Do they need different capabilities?

In our case we defined automated builds running CI on every branch, filtered by project paths, triggered by pull requests, and also nightly builds.
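Builds can also be queued on demand or from scripts. As a minimal sketch (the account, project and definition id below are hypothetical), the VSTS Build REST API lets you trigger a build definition against any branch:

```python
# Minimal sketch: queue a VSTS build for a given branch via the REST API.
# Account, project and definition id are hypothetical placeholders.
import requests

VSTS_ACCOUNT = "myaccount"
PROJECT = "MyProject"
PAT = "<personal-access-token>"

def queue_build(definition_id, branch="refs/heads/develop"):
    """Queue a build of the given definition against a branch."""
    url = (f"https://{VSTS_ACCOUNT}.visualstudio.com/{PROJECT}"
           f"/_apis/build/builds?api-version=4.1")
    body = {"definition": {"id": definition_id}, "sourceBranch": branch}
    resp = requests.post(url, json=body, auth=("", PAT))  # PAT as password
    resp.raise_for_status()
    build = resp.json()
    return build["id"], build["status"]

if __name__ == "__main__":
    build_id, status = queue_build(definition_id=42)
    print(f"Queued build {build_id}, status: {status}")
```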

(Image: automated build and deployment pipeline in VSTS)

The tool chosen for this was VSTS, as it provided all of the above, integrated with SonarQube, and was capable of building projects of different kinds and technologies.

(Images: build definition with SonarQube integration, and the resulting SonarQube report)

Release Management

The release strategy can take some time to define, as it's not only about creating a delivery pipeline. It's more about what needs to be released and when, whether it will be released to the cloud or on-premises, whether the release is manual or automated (and how to automate it if it's manual), how we can make the process repeatable, etc.

Factors we take into consideration when defining the release strategy:

  • What do we have to release? It could be services, web, desktop or mobile applications. It could be Infrastructure as Code, such as deploying virtual machines, Docker containers or load balancers; it could even be databases!
  • How are we going to release it? How is the release generated? Is the process automated? Does it require manual approval at any of the stages?
  • What steps need to be taken to release our artefacts? What quality metrics and quality gates are we adding to this process?
  • Do we have a rollback plan? Do we have a disaster recovery plan?
  • What environments are we going to need for releasing the product? Which teams will use them? Will they be static or dynamically generated?
  • Will it be deployed locally or in the cloud?

In our case we had to deploy it all: web apps, services, databases, infrastructure, environments, and our target environment for those was Microsoft Azure. For this project in particular, that meant Azure PaaS, such as Azure App Services, Azure Elastic Pools and Azure API Management. There was also an IaaS component, covering virtual machines, network infrastructure, hybrid infrastructure, containers, etc., which was more focused on interoperability with legacy systems.

For controlling these releases and deployments we used VSTS Release Management, which also allowed us to enable Continuous Deployment and easily visualise which versions of our releases are deployed and where.
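As an illustration of that visibility, a sketch along these lines can list each release and the status of its environments through the Release Management REST API. The vsrm host and preview api-version reflect the VSTS release API of the time, and the account and project names are placeholders:

```python
# Sketch: list recent releases and where each one is deployed.
# The vsrm host and preview api-version are assumptions based on the
# VSTS Release Management REST API of the time; names are placeholders.
import requests

VSTS_ACCOUNT = "myaccount"
PROJECT = "MyProject"
PAT = "<personal-access-token>"

def list_releases():
    url = (f"https://{VSTS_ACCOUNT}.vsrm.visualstudio.com/{PROJECT}"
           f"/_apis/release/releases?$expand=environments"
           f"&api-version=4.1-preview")
    resp = requests.get(url, auth=("", PAT))
    resp.raise_for_status()
    for release in resp.json()["value"]:
        # Map each environment (Dev, QA, Prod, ...) to its deployment status.
        envs = {e["name"]: e["status"] for e in release.get("environments", [])}
        print(release["name"], envs)

list_releases()
```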

(Image: release pipeline overview in VSTS Release Management)

Test Management and Quality Gates

I won't go into too much detail on this, as I believe it deserves a post of its own, but I can give you some insight into the main quality gates and test management considerations we reviewed in our client's Test Management and Test Automation strategy.

Priority number one was to move towards Agile practices and DevOps implementations, and to automate testing all over the SDLC.

The main quality gates we proposed were:

  • At story conception level: an agreed definition of done and, for every story, well-defined acceptance criteria written in Gherkin syntax, helping the dev and test engineers to implement tests properly using BDD and TDD practices.
  • Code reviews and pull requests on every merge operation.
  • Code coverage: 100% of the code needs to be covered by unit tests (a minimal gate sketch follows this list).
  • Test automation for UI, API, database, performance, smoke and other testing, all integrated into the CI/CD pipelines at different stages, with results automatically collected and asserted as quality gates.
  • Regression testing happening on nightly builds.
  • Code analysis rules for coding practices when building.
  • SecOps practices, integrating SAST and DAST tools as part of the CI/CD pipelines.
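For the code coverage gate specifically, the mechanics are simple: parse the coverage report produced by the test run and fail the build step when the number is below the agreed bar. A minimal sketch, assuming a Cobertura-style XML report (the file name and gate wiring are placeholders):

```python
# Minimal sketch of a coverage quality gate: fail the build if line coverage
# drops below the agreed threshold. Assumes a Cobertura-style XML report,
# whose root element carries a "line-rate" attribute between 0 and 1.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 1.0  # the gate we agreed on: 100% of code covered by unit tests

def check_coverage(report_path):
    root = ET.parse(report_path).getroot()
    line_rate = float(root.get("line-rate", 0))
    print(f"Line coverage: {line_rate:.1%} (gate: {THRESHOLD:.0%})")
    return line_rate >= THRESHOLD

if __name__ == "__main__":
    # A non-zero exit code makes the CI build step (and so the build) fail.
    sys.exit(0 if check_coverage("coverage.xml") else 1)
```

Wiring this in as a build step right after the test task makes the gate part of every CI build rather than a manual check.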

Manual testing was out of the discussion. But if you are not on a greenfield project and still have to deal with manual testing (we had some legacy projects where it was still a must), then VSTS also provides a nicely done solution for test management, complemented by a Windows client (Microsoft Test Manager) and the Test & Feedback tool for browsers.

3. Implementation

After we agreed and signed off all the strategic plans for:

  • Branching strategy
  • Repository strategy
  • Build strategy
  • Release strategy
  • Environments architecture
  • Test Automation strategy

And after discussing other topics such as SecOps, packages, monitoring and alerts, the hotfix approach and cloud costs, we started implementing in every area, starting with the repo, branching and build strategies.

It is also worth noting that, because we spent a good 60% of the time on planning and strategy discussions, the whole implementation took no more than 30% of the time.

4. Adoption

For teams that have never worked in a fully automated environment where lots of DevOps practices are applied, this can be difficult to absorb from one day to the next. We solved this problem by having different members of the team, each specialised in a different area of expertise, shadow us during the implementation phase.

We also ran a recorded KT (knowledge transfer) session for every delivery plan and invited the whole team (and external teams too) to be part of those sessions.

And last but not least, we left behind training materials and guides to help them fully develop their capabilities around the new DevOps tools and processes.

Some teams prefer us to organise Agile workshops running over three days, with some additional days for tools and technical Q&A sessions.

5. Support

Lastly, our modus operandi is to engage, prepare, plan, deploy, share, guide and finally support these teams for a few weeks across the different areas of work, ensuring they are self-sufficient enough to start working in an agile manner with the new processes and tools. We then contact them periodically to see how much they have matured in the different areas measured during our pre-engagement.

Summary

To summarise, it has been a great experience to participate in such a project, defining the whole DevOps strategy from the very beginning and seeing it flourish over time.

We have made other journeys of a different nature and with different technologies, but with mostly the same approach and the same outcome.

It is important to remark that we shouldn't really focus on the tools but on how to use them and why. In this blog post we have seen an SDLC organised mainly around VSTS, but at MagenTys we have made similar journeys with Jira, Bamboo, Jenkins, Fortify, Docker, Kubernetes, Terraform, Grafana and Prometheus.

And remember: DevOps is not a tool, not a guide, not a methodology; it's a journey.

About Eduardo Ortega
Principal Engineer - Head of DevOps at MagenTys MCSE, MCSD, MCDBA, MCAD, MCSP, MCSTS, PSD, PSM, CSD, Splunk PU

