This article is part of the Gitalytics using Gitalytics series, in which we cover how our team uses Gitalytics to review how we're doing, understand our work patterns, and experiment with processes.

Most software engineering or IT teams go through it. Crunch time to ship a feature, meet a deadline, recover a crashed database, or for a million other reasons. Crunch happens. We're a data-focused, constantly innovating team, so naturally, when we went through a crunch period we had to dig in. This overview covers the period when we were preparing a beta launch, and how we used Gitalytics to see what effect the crunch had on our code churn. So here it is.

What happened when a company focused on software engineering insights had a crunch sprint?

First off, we tracked the crunch time event as "Overtime" in our dashboard.

As you can see, we have a few short periods tagged "Overtime" over the past few months, the largest spanning the end of August and the beginning of September. When we click View Event, we can see the event in more detail.

This is our actual Throughput (the total amount of new and refactored code) through these periods. It looks like the team was highly productive, right?

That's only true if you treat throughput as the primary measure of productivity. If we look at how effective that code was on an ongoing basis, we see a dramatic increase in how much of it was rewritten within three weeks of being committed. We identified the code's status in the same report, simply by changing the filter on what was happening in the codebase.
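Gitalytics computes this for you, but the underlying idea is simple enough to sketch: a line counts as churn if it is rewritten within some window of when it was originally committed. Here is a minimal illustration in Python; the record format, dates, and the three-week window are assumptions for the sketch, not Gitalytics internals:

```python
from datetime import datetime, timedelta

# Each record is one line of code: when it was written, and when it was
# later replaced (None if it survived). Dates are purely illustrative.
changes = [
    {"written": datetime(2019, 8, 26), "rewritten": datetime(2019, 9, 5)},
    {"written": datetime(2019, 8, 27), "rewritten": None},
    {"written": datetime(2019, 8, 28), "rewritten": datetime(2019, 10, 1)},
    {"written": datetime(2019, 9, 2),  "rewritten": datetime(2019, 9, 10)},
]

CHURN_WINDOW = timedelta(weeks=3)

def churn_rate(records):
    """Fraction of lines rewritten within the churn window of being committed."""
    churned = sum(
        1 for r in records
        if r["rewritten"] and r["rewritten"] - r["written"] <= CHURN_WINDOW
    )
    return churned / len(records)

print(f"churn rate: {churn_rate(changes):.0%}")  # 2 of 4 lines -> 50%
```

In practice the write/rewrite pairs would come from the version-control history itself (for example, by tracing lines across commits), which is the part the platform handles for you.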

Another interesting finding from this experiment is that individuals were rewriting their own code at a much higher rate than they were rewriting others' code.

The writer's code being overwritten by themselves
Overwriting others' code

What this tells us is that the team became less effective per hour as they worked more hours than their normal pattern. Because the code generally wasn't rewritten by senior developers but was adjusted by the original author, this wasn't a skill or knowledge gap; it was a decrease in effectiveness.

We need to take this with a grain of salt, of course, because lines of code and the sheer amount of code retained aren't necessarily the best metrics for development teams. What it can tell you is that when engineers are expected to "push through" far more hours than usual to get a project done, code quality measurably decreases.

For larger teams, reviewing an experiment might include checking whether commit sizes and frequency drifted outside the team's agreed-upon processes, whether pull requests became stale or were self-reviewed, or whether team members who worked with unfamiliar colleagues had positive or negative experiences. We focused on this one metric for this experiment, but our customers use the platform for a variety of metrics.

If you're running process improvement or employee experiments, the Events feature in Gitalytics would be highly useful for your team. Contact us today to learn more about how Gitalytics can help you and your team.