Improving the Working Environment for Developers - In 3 Stages Only


… and incidentally establishing DevOps.

Looking back on three years of improving and optimising the working environment for developers at our company, we have learned a lot, and our mindset has changed considerably along the way. Here is what has happened so far.

Stage 1: Actual State Analysis

Do you work in software development, as an app developer (no matter which technology) or as an IT project manager?
Then you have surely heard some of the following sentences from your customers or managers.

"We need the latest version of the app asap to test it."

"The delivered version is weeks old. Can we update it asap?"

"The app download link is dead."

"Which version needs to be built? Which version do we need to deliver?"

"Can't install the app, the certificate is not valid."

"Anyone with spare time to upload a new build?"

Take a short break and notice the emotions those statements trigger in you.

Is it pure pain? 
If not - you can stop reading here.

What comes to a developer's mind when confronted with those statements on a regular basis?

The wish to just automate the pains away. We technology enthusiasts like to automate everything everywhere, because simplifying our daily tasks is part of our attitude. So why not fix those pains by automating the underlying processes?

In a perfect world I would wake up in the morning, get my coffee, start developing the new feature, fix some bugs from yesterday's tested version, grab another coffee… and finish my day by just watching the application being built and delivered automagically, ready to test. Everyone gets notified about the new version, so there is no need to write another Slack message.

Wouldn't that make our lives so much more comfortable?

Back then we already had a build server (call it Jenkins) in place, but it was rarely used. In your daily business as a developer you were sometimes confronted with, and interrupted by, this Jenkins, which had been introduced so that everyone could create application artifacts without having to do it on their own development machine.

Building the Application

Once in a while a build was triggered manually because it was time again to deliver a new app version to the customer. Many obstacles were lying in the way. Sometimes you were not even assigned to the development project behind this application, but simply the only one with a little spare time and, luckily, a little know-how about this big Jenkins monster. After finding the right build job for the project on the server and fixing many build issues, you ended up, usually after many hours of guessing and trial and error, with a build artifact.

Downloading the Artifact

Relieved to see the green light flashing after the successful build, your task was to download the right artifact to your machine for the next step.

Uploading it Somewhere to Deliver it

Your next task was to find some information about the delivery process for this project. Where to upload the artifact? What are the credentials for accessing the uploaded version?

After finding someone, somewhere, who knew where to upload the artifact and luckily had the right credentials, you ended up with an uploaded app and could finally send a notification to the customer.

Happy me! Now: back to work developing your currently assigned project.

Conclusion

Delivering a version was a matter of luck, of finding the right way through a jungle of obstacles. This couldn't be the final stage. Could we get more out of our build infrastructure?

Stage 2: Implementation

We started diving into the documentation of our repository manager (GitLab) and checked whether we could easily integrate it with our Jenkins server, so that a new build would be triggered whenever the code was updated. We found out about webhooks and commit triggers and did the setup on both ends.
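The result of such a setup can be sketched as a minimal declarative Jenkins pipeline. The project layout and build command below are placeholders, not our actual configuration; the GitLab webhook (configured under the project's webhook settings) points at the Jenkins job, so every push starts the pipeline:

```groovy
// Jenkinsfile - minimal sketch of a push-triggered CI build.
// A GitLab webhook pointing at this Jenkins job starts the
// pipeline on every code push.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm   // pull the revision that triggered the build
            }
        }
        stage('Build') {
            steps {
                // placeholder: replace with your project's real build command
                sh './gradlew assembleDebug'
            }
        }
    }
}
```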

Conclusion

We did it! With a few simple steps we had implemented Continuous Integration without knowing it and completed the first checkpoint on our track. We were now part of the CI hype train!

The advantages of having CI for our projects were obvious. After each code push we had a good (or bad) feeling about whether the application still built in a sandbox environment and not just in our own local environments. Build issues were anticipated at a very early stage, which led to a less stressful life afterwards.

Fast forward a couple of months: we had established Continuous Integration in every new project and in a handful of existing ones, and created our first best practices for dealing with Jenkins. The stressful days before an application delivery phase were reduced. There was no need anymore to manually trigger a Jenkins build on Friday afternoon and cross fingers that the green traffic light would show up signalling a successful build. A build artifact was already lying on our Jenkins on Friday afternoon, just waiting to be uploaded to the file server from where it could be shared with the customer.

We asked ourselves whether we could reach the next stage by automating the upload to the FTP server, and added some lines of code to our Jenkins build script which did the job.
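The extra lines can be sketched as an additional pipeline stage. Host, path, artifact name and the credentials ID are illustrative placeholders, assuming the Jenkins Credentials Binding plugin is available:

```groovy
// Sketch: upload the build artifact to the FTP server via curl.
// 'ftp-delivery' is a hypothetical Jenkins credentials ID.
stage('Deliver') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'ftp-delivery',
                                          usernameVariable: 'FTP_USER',
                                          passwordVariable: 'FTP_PASS')]) {
            // -T uploads the local file to the remote directory
            sh 'curl -T build/app.apk ftp://files.example.com/releases/ --user $FTP_USER:$FTP_PASS'
        }
    }
}
```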

Conclusion

Had we just reached the final stage, Continuous Delivery? Yes, we did it! (At least, that is what we thought back then.)

The application was now built after every push to our repo and, after a successful build, uploaded to the file server. Great!

We enjoyed watching this setup automagically upload one build after another. We introduced proper versioning so the files on the server would not get overwritten, and a manual process of regularly doing housekeeping on the server and deleting old apps.
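One simple way to get unique, sortable file names is to combine Jenkins' built-in build number with the git short hash. The naming scheme and paths below are illustrative, not our exact convention:

```groovy
// Sketch: derive a unique artifact name per build so uploads
// to the file server never overwrite each other.
def shortHash = sh(script: 'git rev-parse --short HEAD',
                   returnStdout: true).trim()
def artifactName = "app-${env.BUILD_NUMBER}-${shortHash}.apk"
// rename the freshly built binary before uploading it
sh "mv build/app.apk build/${artifactName}"
```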

Jenkins Pipeline Scripts from SCM

But as soon as the setup was introduced to other projects, we saw bad things happen: Jenkins scripts were copied and pasted. This had never been challenged and had worked until now, but suddenly we had logic in our scripts that was generic enough to be shared between jobs… the simple logic of uploading a file.

We started evaluating best practices for sharing scripts between different jobs and came across the neat option of pulling the Pipeline Script for each job from a source code repository. We created a git repository for all of our pipeline scripts. While moving all new job scripts into this repo, we also switched from plain bash scripts to Groovy scripts and extracted reusable code into a separate utils script, which we then imported in the pipeline-specific files.
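With the scripts in their own repository, a job's pipeline file can pull in the shared code via Jenkins' `load` step. Repository URL and file names here are illustrative:

```groovy
// Sketch: fetch the shared script repo and load the reusable
// upload logic instead of copy-pasting it into every job.
node {
    // hypothetical repo holding all pipeline and utils scripts
    git url: 'https://gitlab.example.com/devops/pipeline-scripts.git'

    // utils.groovy must end with 'return this' so its methods
    // become callable on the returned script object
    def utils = load 'utils.groovy'
    utils.uploadArtifact('build/app.apk')   // the shared upload logic
}
```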

Introducing HockeyApp (now AppCenter)

We also searched for a more convenient way than our FTP server to provide app downloads for the iOS, Android and Windows platforms. HockeyApp (now App Center) was the solution: a platform that offers an easy way to manage and distribute app binaries to beta testers. The switch allowed us to manage different tester groups (internal and external) and sped up the test cycles, which noticeably improved the quality of each project.
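Today this upload step could be done with the App Center CLI from a pipeline stage. The app slug and group name below are hypothetical, and the exact CLI flags may differ between versions:

```groovy
// Sketch: distribute the binary to a tester group via the
// App Center CLI (app slug and group name are placeholders).
stage('Distribute') {
    steps {
        sh 'appcenter distribute release ' +
           '--app our-org/our-app ' +
           '--file build/app.apk ' +
           '--group "Internal Testers"'
    }
}
```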

After the refactoring phase and a lot of feedback from different departments, we decided to improve the notification process for new releases. Luckily, we were improving our internal communication processes at the same time anyway and had introduced Slack as our main communication tool. Slack fitted perfectly into our scenario. We checked for integrations with Jenkins and quickly found a way to enrich our pipeline scripts with Slack notifications. In a first step we sent notifications about releases and build failures to the project's Slack channel, which led to interruptions of ongoing discussions there. So we decided to create an additional notifications channel for each project with a CD pipeline in place and invited all project members to those channels.
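With the Jenkins Slack plugin, such a notification boils down to a `slackSend` step in the pipeline's `post` section. The channel name and message wording are placeholders:

```groovy
// Sketch: notify the per-project notifications channel after
// every build, using the Jenkins Slack plugin's slackSend step.
post {
    success {
        slackSend channel: '#myapp-notifications',
                  color: 'good',
                  message: "New build ${env.BUILD_NUMBER} is ready to test: ${env.BUILD_URL}"
    }
    failure {
        slackSend channel: '#myapp-notifications',
                  color: 'danger',
                  message: "Build ${env.BUILD_NUMBER} failed: ${env.BUILD_URL}"
    }
}
```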

Conclusion

With the addition of notifications, we improved project transparency significantly. Every project member was now aware of the project status at all times.

We managed to find a well-fitting solution for every project participant, one that eliminated many manual steps with no decrease in the flow of information.

Stage 3: Observe and Improve

We had unknowingly done a lot of work in the development operations area. Back when we started, the term DevOps was not widespread, or at least had not reached us, and it took us some time to realise what DevOps means for us. At least we now had a term to describe our tasks, which was very handy for internal communication.

Explaining the term DevOps and the field of work correctly is really challenging, at least towards non-tech people. The definitions range from "I'm a Text File Engineer" to "I automate the boring stuff… with magic."
(https://www.reddit.com/r/devops/comments/awx6d7/can_anyone_describe_our_profession_to_nontech/)

As we all know, whoever stops getting better stops being good. So we aim to iteratively improve the processes around our Continuous Delivery pipelines.

Working Group

We introduced a DevOps working group including experts from different technology stacks, both to get more feedback and to improve company-wide knowledge sharing about DevOps, its possibilities and its advantages. As artifacts, we are generating best practices for different technology stacks. The working group is slowly evolving into an internal service provider for all topics regarding the improvement of development and release cycles.

Best Practices

We introduced generic job scripts and best practices for different build targets and platforms. This allows us to create a pipeline for a new project in a short time by picking an existing job configuration and changing parameters to fit the new project.
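Declarative pipeline parameters are one way to make a job script generic. The parameter names and default values below are our own illustrative convention, not a fixed standard:

```groovy
// Sketch: one generic job script, configured per project via
// parameters instead of copy-pasted, hard-coded values.
pipeline {
    agent any
    parameters {
        string(name: 'PROJECT_NAME', defaultValue: 'myapp',
               description: 'Used for artifact and channel names')
        string(name: 'BUILD_CMD', defaultValue: './gradlew assembleRelease',
               description: 'Project-specific build command')
    }
    stages {
        stage('Build') {
            steps {
                sh params.BUILD_CMD
            }
        }
    }
}
```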

Automated Style Checking and Testing

We integrated automated style checks and tests to improve our code quality and to create a common understanding of code quality between project members, which sharpened our definition of done. Many discussions were triggered around this topic.
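In a pipeline, such quality gates typically become two early stages that fail the build before anything is delivered. The Gradle commands below are examples; the actual linter and test runner depend on the project's stack:

```groovy
// Sketch: quality gates in front of the delivery steps. The
// concrete tools (lint, ktlint, SwiftLint, ...) vary per stack.
stage('Style Check') {
    steps {
        sh './gradlew lint'   // fails the build on style violations
    }
}
stage('Test') {
    steps {
        sh './gradlew test'   // unit tests must pass before delivery
    }
}
```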

Automated generation of the Code Documentation

We created scripts to automate the generation of the code documentation for our internal libraries. That in itself was a complete topic of its own, with many discussions and trial-and-error iterations. I will recap it in another post.

Nightly Builds

We also introduced automatic nightly builds to anticipate, at an early stage, infrastructural failures in our build pipeline, third-party integration problems and other errors that did not have their root cause in a breaking code commit. This was necessary for projects which were already rolled out and had no ongoing development sprints, and therefore were not built regularly.
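In a declarative pipeline, a nightly build is just a cron trigger on top of the existing stages. The schedule below is an example value:

```groovy
// Sketch: run the pipeline every night regardless of pushes,
// so infrastructure or third-party breakage surfaces early.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // nightly; 'H' hashes the minute to spread load
    }
    stages {
        stage('Build') {
            steps {
                sh './gradlew assembleDebug'   // placeholder build command
            }
        }
    }
}
```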

The feeling of seeing successful builds for all of your ongoing projects each morning is just amazing and kickstarts your working day right.

Summary

Stage 1: Actual State Analysis

We analysed the main pain points during the development phase of an application from the developer's perspective and gradually tried to solve as many of them as possible.

Stage 2: Implementation

We customised some of our existing tools, introduced a couple of best practices and processes, and managed to establish a working group over the course of three years.

Stage 3: Observe and Improve

We learned that there is a term for what we are doing — DevOps. We are still improving what we have implemented in Stage 2 to be able to adapt to new technologies, platforms and processes.

We see DevOps as the main tool for improving the working environment for developers: reducing disturbances as much as possible while simultaneously improving project transparency and letting developers focus on the essentials. And, on top of that, getting application quality improvements for free.

The Continuous Delivery implementation fits perfectly with our agile development approach of regular project reviews and sprints.