As technology companies keep transforming toward “DevOps” manufacturing, the shift is as much cultural as it is about technologies and tools. Vendors offer tools for every station along the development road, from the point a product requirement is raised to the moment code is delivered to production, but in most cases there is no single, universal, holistic solution.
Development at scale requires a strong solution that can gather all the CI/CD data and produce useful DevOps performance insights, including governance and audits. It is relatively uncontroversial that measuring performance in software is hard: unlike other kinds of manufacturing, here the outcomes are vague and change over time.
This becomes even more essential for DevOps when breaking a monolithic application into hundreds of microservice components that are developed for several products, across many teams, in multiple locations.
Tracking all the pieces raises many questions: what is this specific microservice, and who owns it? What is its life cycle, and where and when was it certified? Which third-party versions does it contain? Which version passed integration functional and non-functional tests, with which versions of other components, and on which environment topology? These are all examples of the challenges we faced when breaking up the monolithic products.
This article describes my personal journey as a DevOps engineer at Amdocs: how we collect the distributed “software life cycle” data and built a data-driven approach that helps us make the right decisions throughout the development phases.
Jenkins supports build views, including views of build progress across several branches; Blue Ocean (a Jenkins extension) tries to improve this even further, and some Jenkins plugins provide monitoring of project builds.
SonarQube, as a static code analysis and unit test coverage tool, offers several dashboards that provide comprehensive, in-depth analysis and trends.
Hygieia focuses on the DevOps aspects: it aggregates data from several sources such as source control (Git), the build process (Jenkins), Sonar scans, and automated tests, and provides multi-dimensional insights into the development process.
Although these example tools can be integrated, it is still difficult, or even impossible, to connect all the pieces and get a higher-level view of the “manufacturing floor”. Jenkins runs builds on several distributed instances, and the same is true for Sonar and the other tools we use; it gets even more complicated as IT moves to cloud vendors.
More than that, on the culture side, development practices are unique to each company, or even to each team; they are coupled to individual culture and depend on a variety of perspectives, so, especially in big companies, there cannot be one uniform solution that fits all.
Three years back (2017), Amdocs, the enterprise corporation I work for, started a major DevOps transformation program called MS360: a courageous and challenging decision was made to shift the core products to microservices using the latest technology stack.
Standardization, and the ability to track the program’s success, is not only a DevOps key; it is extremely important for the whole development life cycle. At the same time, the tooling must leave teams some freedom to address their specific requirements.
What We Built — “ms-catalog”
The following describes the approach and the technology we developed for collecting large quantities of data across the growing DevOps toolset and turning that data into a central source of truth.
This data-driven approach must support a variety of metadata, in different formats from different vendors; it must accommodate frequent changes and support flexible, self-service dashboard creation and customization.
Examples of CI/CD data:
- Component build data: creation date of a specific microservice version, build duration, the build report from the build server, artifact location (Nexus), the owning team, and the locations of the API and architecture documentation.
- Code quality: unit test coverage, functional test data, Sonar analysis, latest SCM changes, and code change size.
- Component integration and release data: the versions of other components certified with this version, the third-party versions (database, messaging system, common foundation components), and the release version that includes this specific component version.
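To make this concrete, here is a minimal sketch of what one “ms-catalog” record covering the three categories above might look like. All field names, values, and URLs are illustrative assumptions, not the actual Amdocs schema.

```python
import json

# Hypothetical catalog document for one microservice build version.
# Every field name and value below is illustrative, not the real schema.
build_doc = {
    "component": "billing-service",           # microservice name
    "version": "3.14.2-b107",                 # specific build version
    "owner_team": "billing-core",             # owning team
    "build": {
        "duration_sec": 412,
        "artifact_url": "https://nexus.example.com/repo/billing-service/3.14.2-b107",
        "api_doc_url": "https://docs.example.com/billing-service/api",
    },
    "quality": {
        "unit_coverage_pct": 81.5,            # from Sonar / unit tests
        "sonar_gate": "PASSED",
        "code_change_size": 230,              # changed lines since last build
    },
    "certified_with": {                       # integration / release data
        "payment-service": "2.7.0",           # other component versions
        "postgresql": "12.4",                 # 3rd-party versions
    },
    "release": "20.2",                        # release that includes this version
}

# A document like this is what gets indexed into the central knowledge base.
print(json.dumps(build_doc, indent=2))
```

One flat document per component version keeps the data easy to query and chart, at the cost of some duplication across versions.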
Below is the high-level diagram of our solution:
This architecture offers a simple solution based on open source tools such as Grafana and Elasticsearch. Most of our product logic and code builds the “data dictionary”, collecting the data and dumping it into a central knowledge base.
For collecting data, the solution uses a Jenkins shared library that provides a common layer which can be consumed from any Jenkins host and plugged into a specific microservice build pipeline. Thanks to this central-layer approach, changes can be easily distributed and scaled to any number of components.
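The actual shared library is Jenkins Groovy; as a language-neutral illustration, the sketch below mimics in Python what such a pipeline step would do: assemble the record and shape an index request for a central Elasticsearch endpoint. The endpoint URL, index name, and field names are assumptions for illustration, and no request is actually sent.

```python
import json

ES_URL = "https://es.example.com"  # hypothetical central Elasticsearch endpoint


def publish_build_record(component: str, version: str, metadata: dict) -> dict:
    """Mimic the shared-library step each build pipeline calls after a build.

    Returns the request a collector would send, rather than sending it,
    so the shape of the call is visible without any network dependency.
    """
    record = {"component": component, "version": version, **metadata}
    return {
        "method": "POST",
        "url": f"{ES_URL}/ms-catalog/_doc",   # one index as the source of truth
        "body": json.dumps(record),
    }


req = publish_build_record("billing-service", "3.14.2-b107",
                           {"build_duration_sec": 412})
print(req["method"], req["url"])
```

Because every pipeline calls the same shared step, a change to the record schema or the endpoint is made once, in the library, and reaches all components on their next build.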
Once the data is collected, the product offers a “zero db” that each team can take and customize, plus one management view that our central product team defines and controls.
“Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored.” We use it as the UI layer for a collection of dashboards and plugins that fetch the data and present it in multiple formats and metrics.
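As an example of the kind of question a dashboard can answer, the snippet below builds a hypothetical Elasticsearch aggregation that a Grafana panel could run against the catalog index: average build duration per owning team over the last 30 days. The field names match the illustrative record sketched earlier and are assumptions, not the real schema.

```python
# Hypothetical Elasticsearch aggregation for a Grafana panel:
# average build duration per owning team, last 30 days.
duration_by_team = {
    "size": 0,  # we only need the aggregation, not the raw documents
    "query": {"range": {"@timestamp": {"gte": "now-30d"}}},
    "aggs": {
        "per_team": {
            "terms": {"field": "owner_team"},      # one bucket per team
            "aggs": {
                "avg_duration": {
                    "avg": {"field": "build.duration_sec"}
                }
            },
        }
    },
}

print(list(duration_by_team["aggs"]["per_team"].keys()))
```

The same index can back many such panels (coverage trends, release composition, certification status) without any change to the collectors.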
To sum up
You may have the right tools to make the process more efficient, to continuously improve, and to keep the production lines consistent; however, all this important metadata still needs to be gathered in one place and mined into useful information. This approach, even across different tools, offers a low-cost central “BI” solution that is mostly based on open source tools and can be extended.
The next step
Moving CI/CD from in-house IT to multi-cloud services is around the corner. Consistency and governance when developing on different platforms is another key element of success for every application. Moreover, enforcing standards and rejecting applications that do not meet the company standard is a proactive approach that will unleash the full power of this solution.
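Once every component has a catalog record, such a proactive gate becomes a simple check over that record. The sketch below is a minimal illustration of the idea; the thresholds and field names are hypothetical, not an actual company policy.

```python
# Hypothetical policy gate over a catalog record: a component version
# is rejected unless it meets minimum standards. Thresholds and field
# names are illustrative only.
def meets_standards(record: dict) -> bool:
    quality = record.get("quality", {})
    build = record.get("build", {})
    return (
        quality.get("unit_coverage_pct", 0) >= 70   # minimum test coverage
        and quality.get("sonar_gate") == "PASSED"   # static analysis gate
        and "artifact_url" in build                 # artifact published to Nexus
    )


candidate = {
    "quality": {"unit_coverage_pct": 81.5, "sonar_gate": "PASSED"},
    "build": {"artifact_url": "https://nexus.example.com/repo/billing-service/3.14.2-b107"},
}
print(meets_standards(candidate))  # prints True
```

Wiring a check like this into the shared pipeline layer would block non-compliant versions at build time, on any platform, rather than discovering them later in a dashboard.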