The Compounding Effect of Version Control Performance

Pablo Santos Luaces

As software developers, we tend to focus on the latest, shiniest new feature and often take the core value of our product for granted.

I remember my surprise when, visiting a customer, I asked their team what they liked best about the product. The answer: every morning it syncs much faster than their previous version control. It wasn't the latest change in the UI, or the super-advanced merge feature we had just added. It was the speed. I had taken it for granted, considering it something that was of course important but had been there for years. And it was the real key thing.

And something often overlooked about speed is how it compounds, impacting productivity in ways that are not obvious at first.

Diversion performance boost

Over the last few months, the team at Diversion has dramatically improved the speed of the product. They swarmed together to respond to the requirements of much larger customers. When you have to handle millions and millions of files at the head of the repo, the rivets start to pop.

To some extent, these teams need that performance just to operate. But Diversion has not only matched but vastly outperformed the version control it is replacing. And this creates new opportunities.

The compounding effect of speed

An improvement in the speed of your daily version control operations (sync or submit, for instance) makes you wait less.

Every operation that takes more than a few seconds makes you switch context: you go to X or Instagram or whatever, and your flow is ruined.

So, if an operation goes down from, say, 30 seconds to 10, the impact is not just 20 seconds times the number of operations per day; it is much more. Crossing the threshold from under 10 seconds to more than 20 costs minutes of wasted time browsing the internet, chatting on Slack while you wait, and so on.
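To make the compounding concrete, here is a rough back-of-the-envelope model. All the numbers in it (20 operations a day, a 10-second distraction threshold, a 3-minute cost to regain flow) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope model of time lost to slow VCS operations.
# All constants below are illustrative assumptions, not measurements.

DISTRACTION_THRESHOLD_S = 10   # beyond this, people tab away
CONTEXT_SWITCH_COST_S = 180    # assumed cost of regaining flow after tabbing away

def daily_cost(op_seconds: float, ops_per_day: int) -> float:
    """Seconds lost per day: raw wait time plus a flow penalty per slow operation."""
    penalty = CONTEXT_SWITCH_COST_S if op_seconds > DISTRACTION_THRESHOLD_S else 0
    return ops_per_day * (op_seconds + penalty)

slow = daily_cost(30, ops_per_day=20)   # 20 * (30 + 180) = 4200 s = 70 min
fast = daily_cost(10, ops_per_day=20)   # 20 * (10 + 0)   =  200 s ≈ 3.3 min
print(f"saved per day: {(slow - fast) / 60:.0f} minutes")
```

Under these assumptions the saving is over an hour a day, far more than the naive "20 seconds times 20 operations" (under 7 minutes), because the faster operation also stops triggering the context switch.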

But there is more: when an operation is no longer painfully slow, it creates new opportunities that go beyond saving time and into real workflow changes.

Fast branching changes the game

Take branching. One of the most widely used version control systems in the gaming industry can't branch performantly once the depots grow too large. Taking 5-20 minutes to create a branch makes it an event: something to be planned, considered, feared. It is not something you do lightly. And then entire workflows are shaped around that limitation (which is very, very common in the gaming industry).

Now, suddenly, your branching time goes from 5 minutes to 20 ms, as it does with Diversion. The naive way to think about it is: OK, we create one branch a month, so we're only saving 5 minutes a month; it is definitely not worth the change.

Obviously the impact is far greater: suddenly the team can create as many branches as they want. If you want to work on a new ticket, you can go and create a branch for it, lightly, with no planning and no approvals, because branches become an instrument of productivity, not a release event. You can use branches to develop tasks and merge them in a timely way, with the review process driven by the branch (something Git users take for granted, but many game teams cannot, because Git doesn't handle the sizes of their binaries fluently).
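What that branch-per-task pattern looks like in practice is the workflow Git users take for granted, sketched here in Git syntax (the ticket name is made up for illustration):

```shell
# Create a throwaway branch for a single ticket: cheap, no planning needed.
git switch -c task/ticket-512          # milliseconds, not minutes
# ...edit files, then commit the work on the branch...
git commit -am "Fix spawn-point bug (ticket-512)"
# Review is driven by the branch; merge once it is approved.
git switch main
git merge --no-ff task/ticket-512
git branch -d task/ticket-512          # and the branch simply goes away
```

When branch creation takes minutes, none of this is practical; when it takes milliseconds, it becomes the default way to work.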

The same can happen with release branches, experiment branches, ideas, and so on. Fast branching opens up improvements that go beyond the current workflow.

Art and code in a single repo erases artificial complexity

And branching is just one example; the same happens with the ability to handle code and art, no matter the size, in the same repo. Do you simply accept the extra complexity of keeping code in one repo and art in a second one because the system that manages code is not good with big assets, and the one that deals with large binaries doesn't offer the features programmers demand?

Wrapping up

As a friend of mine always says about their data analytics platform: speed matters. It matters a lot. It does not just remove blockers; it creates whole new opportunities when operations you had accepted as costly suddenly take almost no time.

About the Author

Pablo Santos Luaces is the founder and former CTO of Plastic SCM, SemanticMerge, and Gmaster. An accomplished programmer specializing in version control and merge technologies, he now shares his insights on these topics through writing and speaking, including on his Medium page.
