1. Introduction
In this blog post / notebook, we take a look at how much information you can extract from a simple Git log output. We want to know
- where the developers come from
- on which weekdays the developers don’t work
- which developers are working on weekends
- what the normal working hours are
- if there are any signs of overtime periods.
We use Pandas as our data analysis toolkit to accomplish these tasks for the big open source project IntelliJ.
2. Gaining the data
To get the necessary data, we clone the Git repository with
git clone https://github.com/JetBrains/intellij-community.git
and create a latin-1 encoded log file of the complete commit history with
git log --date=raw --encoding=LATIN-1 --pretty="%ad%x09%aN%x09%ae" > git_timestamp_author_email.log
on the command line.
This gives us a nice file with the following content:
1514469569 +0300 Kirill Kirichenko kirill.kirichenko@jetbrains.com
1514469402 +0100 Anna Kozlova anna.kozlova@jetbrains.com
1514469119 +0100 Vladimir Krivosheev vladimir.krivosheev@jetbrains.com
1514468066 +0100 Vladimir Krivosheev vladimir.krivosheev@jetbrains.com
1514462548 +0100 Vladimir Krivosheev vladimir.krivosheev@jetbrains.com
...
Each line includes the UNIX timestamp (in seconds since the epoch), a space, the time zone (where the author lives), a tab, the name of the author, another tab and the email address of the author.
Note:
- We use the --date=raw option, which returns the UNIX timestamp (in UTC) together with the author's time zone offset. We use the UNIX timestamp because Pandas can parse this format very efficiently. We also need the time zone because we want to know when each committer works in their local time.
- We've created and implicitly used a .mailmap file for this task to map multiple author names and email addresses to the same person. You can find this file in this GitHub Gist. To use it, the file has to be placed in the root of the Git repository before executing the git log command. A minimal example of such a file is shown below.
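As a rough illustration of the .mailmap format (the names below are made up and are not the content of the Gist), each entry maps a commit author as found in the history to a canonical name and email address:
# Proper Name <proper@email>   Commit Name <commit@email>
Jane Doe <jane.doe@jetbrains.com>   Jane D <jane@old-address.example>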
3. Wrangling the raw data
We import the data using Pandas' read_csv function with the appropriate parameters.
import pandas as pd
raw = pd.read_csv(
r'../../intellij-community/git_timestamp_author_email.log',
sep="\t",
encoding="latin-1",
header=None,
names=['unix_timestamp', 'author', 'email'])
raw.head()
We need to split the information in unix_timestamp into the separate columns timestamp and timezone.
raw[['timestamp', 'timezone']] = raw['unix_timestamp'].str.split(" ", expand=True)
raw.head()
We also create a new numeric column for the timezone offset for later calculations.
raw['timezone_offset'] = pd.to_numeric(raw['timezone']) / 100.0
raw.head()
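Note that the division by 100 treats half-hour offsets such as +0530 as 5.3 hours instead of 5.5. If all offsets are full hours (as in the sample above), this is fine. A minimal sketch of an exact conversion of the ±HHMM offsets (a hypothetical alternative with a made-up column name, not used in the remainder of this notebook) could look like this:
import numpy as np
# hypothetical exact conversion: "+0530" -> 5.5 hours, "-0430" -> -4.5 hours
offset = pd.to_numeric(raw['timezone'])
raw['timezone_offset_exact'] = np.sign(offset) * (offset.abs() // 100 + offset.abs() % 100 / 60.0)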
To get a real datetime column, we convert the information in timestamp accordingly.
raw['timestamp'] = pd.to_datetime(raw['timestamp'], unit="s")
raw.head()
We also calculate the local time of each commit by adding the timezone_offset data to the timestamp entries.
raw["timestamp_local"] = raw['timestamp'] + pd.to_timedelta(raw['timezone_offset'], unit='h')
raw.head()
Finally, we copy only the needed data from the raw dataset into the new DataFrame git_authors.
git_authors = raw[['timestamp_local', 'timezone', 'author', 'email']].copy()
git_authors.head()
4. Refining the dataset
In this section, we add some additional information to the DataFrame to accomplish our tasks.
4.1 Adding weekdays
First, we add the information about the weekdays based on the dayofweek information of the timestamp_local column. We round down possible non-integer values and extract the weekday name from the calendar.day_name list. Because we want to preserve the order of the weekdays, we convert the day entries to a Categorical data type, too.
import calendar
git_authors['day'] = git_authors["timestamp_local"].apply(lambda x : calendar.day_name[int(x.dayofweek)])
git_authors['day'] = pd.Categorical(git_authors['day'], categories=calendar.day_name, ordered=True)
git_authors.head()
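As an aside, more recent Pandas versions also provide the dt.day_name() accessor, which (with the default English locale) yields the same weekday names more concisely; the resulting column is equivalent to the one created above:
# equivalent, more concise variant using the dt accessor (newer Pandas versions)
git_authors['day'] = pd.Categorical(
    git_authors['timestamp_local'].dt.day_name(),
    categories=list(calendar.day_name),
    ordered=True)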
4.2 Adding working hours
For the working hour analysis, we extract the hour information from the timestamp_local column.
git_authors['hour'] = git_authors['timestamp_local'].dt.hour
git_authors.head()
5. Analyzing the data
With the prepared git_authors DataFrame, we are now able to deliver insights into the past years of development.
5.1 Developers’ timezones
First, we want to know where the developers roughly live. For this, we plot the values of the timezone column as a pie chart.
%matplotlib inline
git_authors['timezone'].value_counts().plot(kind='pie', figsize=(7,7), title="Developer's timezones")
5.2 Weekdays with the most commits
Next, we want to know on which days the developers are working during the week. We count the commits per weekday but avoid sorting the results to keep the order of our categories.
commits_per_weekday = git_authors['day'].value_counts(sort=False)
commits_per_weekday
We plot the result as a standard bar chart.
ax = commits_per_weekday.plot(kind='bar', title="Commits per weekday")
ax.set_xlabel('weekday')
ax.set_ylabel('# commits')
Result:
Most of the commits occur during normal working days. There are just a few commits on weekends.
5.3 Working behavior on weekends
We take a look at the type of weekend workers. We want to see if the commits come from the main developing company JetBrains or from other voluntary contributors who work on the open source project on weekends.
As an approximation, we use the domain name in the author's email address to decide whether an author is an employee of JetBrains or not. We use a separate DataFrame weekend_workers for this task.
weekend_workers = git_authors[['day', 'email']].copy()
weekend_workers['employee'] = weekend_workers['email'].str.lower().str.endswith("@jetbrains.com")
weekend_workers.head()
We group the commits by weekday and employee status, count them and store the result in the new DataFrame commits_per_weekday_employee.
commits_per_weekday_employee = weekend_workers.groupby(['day', 'employee']).count().unstack()
commits_per_weekday_employee
To be able to spot differences more easily, we calculate the ratio of commits from employed developers to all commits per weekday.
commits_per_weekday_employee['employed_ratio'] = \
commits_per_weekday_employee['email'][True] / \
commits_per_weekday_employee['email'].sum(axis=1)
commits_per_weekday_employee
We plot this new information in a second bar chart to see possible differences in the committing behavior between JetBrains employees and other contributors.
ax = commits_per_weekday_employee['employed_ratio'].plot(
kind='bar', color='g', title="Ratio of commits from employed authors")
ax.set_xlabel("weekdays")
ax.set_xlabel("ratio")
Result
The ratio of commits from JetBrains employees differs only slightly (and not significantly) between weekends and working days.
5.4 Usual working hours
To find out about the working habits of the contributors, we group the commits by hour and count the entries (in this case we choose author) to see if there are any irregularities.
working_hours = git_authors.groupby(['hour'])['author'].count()
working_hours.head()
Again, we plot the results with a standard bar chart.
ax = working_hours.plot(kind='bar')
ax.set_title("Distribution of working hours")
ax.yaxis.set_label_text("# commits")
ax.xaxis.set_label_text("hour")
Result
The working hours follow a nice, roughly Gaussian distribution. It seems that this open source project is mainly developed by people during their regular full-time jobs.
5.5 Signs of overtime
Finally, we have a look at possible overtime periods. For this, we first group all commits on a weekly basis per author. As grouping function, we choose max() to get the latest hour at which each author committed in each week.
latest_hour_per_week = git_authors.groupby(
[
pd.Grouper(key='timestamp_local', freq='1w'),
'author']
)[['hour']].max()
latest_hour_per_week.head()
Next, we want to know if there were any stressful periods that forced the developers to work overtime for a longer stretch of time. We calculate the mean of these latest commit hours over all authors for each week.
mean_latest_hours_per_week = latest_hour_per_week.reset_index().groupby('timestamp_local')[['hour']].mean()
mean_latest_hours_per_week.head()
We also create a trend line that shows how the contributors have been working over the span of the past years. We use the polyfit function from numpy for this, which needs a numeric index to calculate the polynomial coefficients later on.
numeric_index = range(0, len(mean_latest_hours_per_week))
numeric_index
We then calculate the coefficients of a third-degree polynomial based on the hours of the mean_latest_hours_per_week DataFrame.
import numpy as np
coefficients = np.polyfit(numeric_index, mean_latest_hours_per_week.hour, 3)
coefficients
For visualization, we build the polynomial function from the coefficients and calculate the y-coordinates for all weeks that are encoded in numeric_index.
polynomial = np.poly1d(coefficients)
ys = polynomial(numeric_index)
ys[:5]
Finally, we plot the results of the mean_latest_hours_per_week DataFrame as well as the trend line in one line plot.
ax = mean_latest_hours_per_week['hour'].plot(
figsize=(10, 6), color='k', alpha=0.2, title="Late hours per week")
ax.set_xlabel("time")
ax.set_ylabel("hour")
ax.plot(mean_latest_hours_per_week.index, ys)
Result
We can see that the late hours have decreased over time (by almost two hours during the last 12 years). There is no sign of regular death marches. In general, this could be a good sign, pointing to a healthy work-life balance.
6. Closing remarks
We’ve seen that various metrics and results can be easily created from a simple Git log output file. With Pandas, it’s possible to get to know the habits of the developers of software projects.
You can find the executable version of this article as Jupyter notebook on GitHub.