Staying creative without changing your code

Tim Watts
7 min read · Sep 27, 2021

Recently at Volta we’ve been thinking about an interesting problem. As with most things, I’m sure it’s not a new problem, but it was new to us.

Briefly, Volta is an app that enables musicians to create visual experiences to accompany their acoustic performances.
Performances that can be consumed traditionally (YouTube, Twitch etc.) but also in VR and, ultimately, AR. There’s a lot of scope here, but for the purposes of this article — the part that inspired me to write something — what does a “performance consumed in VR” mean for us?

In particular, we want our VR app to allow a user to experience past/archived performances.

Unlike text, audio or video, there’s a rather unusual challenge here. What does archived content for a performance look like? It’s clearly a combination of audio, video & data… but it’s also the code.

What’s more, in our VR app, we of course want you to be able to transition seamlessly from a performance put on a couple of years ago to something live now.

So wait… this implies a pretty big decision. Some options we considered:

  1. Maybe we don’t need archived performances? Could we store them as pre-rendered 3D or even volumetric videos?
  2. Could every performance load a DLL specific to the version it was created with?
  3. How about an ‘upgrader’ that runs on the data from an old performance and makes it compatible with our latest code?
  4. An app with all versions of our code ever released somehow happily living side-by-side with the current versions? The content somehow knows which code needs to execute?

Spoiler: We’ve gone for (4) and this post is a little about the why and the how.

The why

Option (1) seems like a pretty big cop-out to us. Aside from concerns over implementation, storage, quality etc — this not only takes away a lot from the original experience (freedom of movement, interaction etc) but also prohibits remixing of previous performances or other interesting ideas.

Option (2) is interesting, but a) runtime-loading a DLL on e.g. Oculus or iOS sounds pretty challenging — especially in any kind of seamless way? b) wow, that’s a lot of redundant data — a massive increase in waiting times and bandwidth. If this is a reasonable route, we lack the knowledge of how it could be done well.

As for (3), this is definitely better-trodden ground and the option I’m sure some would explore further. So what exactly are we talking about here? Let’s make things more concrete.

Say we have some code (let’s call it a behaviour) that relies on some data:

public float amplitude = 1f; // serialized with each performance

// Scale the cube's Y axis by the recorded amplitude each frame.
myCube.scale = new float3(1f, amplitude, 1f);

Imagine a recording of a performance where amplitude changes over time and is serialized in a save file.
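
To make that concrete, a recording might boil down to something like this. The types and names here are illustrative, not our actual save format:

using System;
using System.Collections.Generic;

// Hypothetical recording: sampled values of a behaviour's serialized
// fields, keyed by time. Illustrative only.
[Serializable]
public class AmplitudeSample
{
    public float time;      // seconds from the start of the performance
    public float amplitude; // the value the behaviour used that frame
}

[Serializable]
public class BehaviourRecording
{
    public string behaviourType;          // e.g. "SomeBehaviour"
    public List<AmplitudeSample> samples; // replayed on load
}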

Fast forward a month later and we’ve updated the code:

public float2 amplitudeXY = new float2(1f, 1f);
public float damping = 0.1f;
private float2 smoothAmplitude; // eased value carried across frames

// Keep damping in a sane range, then ease towards the target each frame.
damping = math.clamp(damping, 0.1f, 0.9f);
smoothAmplitude = math.lerp(smoothAmplitude, amplitudeXY, damping);
myCube.scale = new float3(smoothAmplitude.x, smoothAmplitude.y, 1f);

Without focusing too much on this pseudocode, I hope it’s enough to illustrate the problem succinctly. How could you gracefully upgrade this? Not only would we need some incredibly powerful meta-level way of describing such transformations — no matter how good a job we did, it would never “look the same”. We’d also have to carefully write (and write tests for) the upgrader every time we changed anything. Oof.
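
To make the pain concrete, here is roughly what a hand-written upgrader for just this one change might look like. The types and defaults are entirely hypothetical:

using Unity.Mathematics;

// Hypothetical data snapshots for the two versions above.
public class BehaviourDataV1 { public float amplitude; }
public class BehaviourDataV2 { public float2 amplitudeXY; public float damping; }

public static class PerformanceUpgrader
{
    // Every breaking change needs a hand-written (and tested) step like this.
    public static BehaviourDataV2 Upgrade(BehaviourDataV1 old)
    {
        return new BehaviourDataV2
        {
            // Best guess: copy the old Y-only amplitude onto both axes.
            amplitudeXY = new float2(old.amplitude, old.amplitude),
            // The old version had no damping at all; whatever default we
            // pick changes how the archived performance looks.
            damping = 0.1f,
        };
    }
}

And that’s the easy case: two fields, one behaviour, one version gap.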

Suddenly we’ve arrived at one of my primary concerns. Resistance.

Resistance?

Usually, over time as a product matures:
• More features => more code, more complexity
• More content => more maintenance
• More users => changes are more disruptive

… and a lot more but you get the idea.
It becomes harder and harder to write new code and to delete or modify old code.

The problem is, that’s exactly what you need in order to create & innovate.

An increasing amount of time is spent on maintenance, testing and navigating a codebase.

While some resistance is inevitable, keeping its rate of increase as low as possible is, in my opinion, vital. Not just for productivity but for morale, sanity and especially creativity.

As a young start-up, one of our strengths is quick iteration time and the flexibility to experiment a lot. With a live product to support in the wild, worrying about whether any change will break something about a past performance would be akin to a death knell, in my opinion.

Back to the problem at hand

So we decided that in our case, conversion of the data isn’t going to cut it. Instead, we need different versions of the code — different code paths depending on which version was used in the performance at the time.

We have to feel free to change code and focus on being creative.

So option (4) it is. Tools that automatically help us generate new code paths (primarily behaviours) when required, with all substantively modified versions living side-by-side.

The how

At Volta we’ve already taken a very component driven approach for our app. For the sake of this post (and as alluded to earlier), we call these components ‘behaviours’.

“Exploding Billboards” seen here is a Behaviour in Volta terminology

Ultimately, in Unity land, a behaviour is a MonoBehaviour that gets attached to an instance of a Prefab and modifies properties over time — be it shader variables, transforms etc.

These behaviours offer a great logical separation of ‘code that affects the way something looks’ from core product code and functionality. They encapsulate the data to be serialized and, combined with a version of the assets/textures/materials, how something “looks”. In our case, focusing our effort here accounts for 95% of what makes content look and behave a certain way.
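
In rough strokes, a behaviour looks something like this. This is a minimal sketch rather than our actual SDK; the names are illustrative:

using UnityEngine;

// Minimal sketch of a behaviour: the serialized fields are the data
// that gets recorded; Update() maps that data onto the visuals.
public class PulseBehaviour : MonoBehaviour
{
    public float amplitude = 1f;     // recorded per performance
    public Color tint = Color.white; // ditto

    private Renderer rend;

    void Awake() => rend = GetComponent<Renderer>();

    void Update()
    {
        transform.localScale = new Vector3(1f, amplitude, 1f);
        rend.material.SetColor("_Color", tint);
    }
}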

Approach in a nutshell

Every behaviour has a version. Every time we make a breaking* change to a behaviour we increment the version in the class and file name — essentially creating a new code path:

SomeBehaviour_v1, SomeBehaviour_v2 etc.

*In our case, breaking means that either a) the serialized data changes (in a non-additive way), b) the semantics or interpretation of the data changes — i.e. it looks or behaves differently. This leaves us room to fix bugs and the fixes apply to past performances so long as they wouldn’t be too disruptive.
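
Concretely, the side-by-side versions might look something like this. It’s a sketch; our real attribute and SDK layer have more to them:

using System;
using UnityEngine;

// Minimal stand-in for the version attribute discussed below.
[AttributeUsage(AttributeTargets.Class)]
public class VersionAttribute : Attribute
{
    public readonly int Major, Minor;
    public VersionAttribute(int major, int minor) { Major = major; Minor = minor; }
}

// Two breaking revisions living side-by-side: archived performances
// keep resolving to v1, new performances pick up v2.
[Version(1, 0)]
public class SomeBehaviour_v1 : MonoBehaviour
{
    public float amplitude = 1f;
    void Update() => transform.localScale = new Vector3(1f, amplitude, 1f);
}

[Version(2, 0)]
public class SomeBehaviour_v2 : MonoBehaviour
{
    public Vector2 amplitudeXY = Vector2.one; // breaking: serialized data changed
    void Update() =>
        transform.localScale = new Vector3(amplitudeXY.x, amplitudeXY.y, 1f);
}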

So simple it’s kind of dumb?

Yeaaa… but in practice there are multiple classes involved, an SDK abstraction layer, and a lot of potential for mistakes here.
Missed a file you should have duplicated/renamed?
Didn’t realise you made a “breaking” change to the serialized data?

We’re back to introducing resistance 😱.
It’s easier not to change things for fear of breaking something.

Luckily a lot of this can be massively improved through tooling.

Healthy scepticism

Making tools to automate duplicating & versioning files, detecting when there’s been a breaking change etc. is no small task. The tooling needs to be robust, not create undue resistance and, importantly, allow for manual intervention. Making robust tooling (especially in Unity) is a whole thing in itself.
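
As one example of what that tooling can check: the shape of a behaviour’s serialized data can be fingerprinted so that a non-additive change gets flagged automatically. A sketch of the idea (not necessarily how our tools do it):

using System;
using System.Linq;
using System.Reflection;

// Sketch: fingerprint the shape of a behaviour's serialized data.
// Comparing against the fingerprint stored for the last released
// version flags removals and type changes as potential breaking changes.
public static class SerializedShape
{
    public static string Fingerprint(Type behaviourType)
    {
        // Public instance fields are what Unity serializes by default.
        var fields = behaviourType
            .GetFields(BindingFlags.Public | BindingFlags.Instance)
            .OrderBy(f => f.Name)
            .Select(f => $"{f.FieldType.FullName} {f.Name}");
        return string.Join(";", fields);
    }
}

Purely additive changes leave the old entries intact, so they need not force a new major version.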

Suffice it to say, nothing’s perfect and everything’s a trade-off, but we’ve now got something that meets the above requirements and has substantially lowered resistance. I’m sure this could also be improved in the future by leveraging the upcoming Roslyn source generators.

All that to suggest duplicating files and automatically renaming them?

Well… yup, pretty much.

The tools do a lot of the heavy lifting and archive the versions out of our working directory. We’re left with just one file to work on (e.g. SomeBehaviour). It’s back to feeling as if we don’t have a live product and archived performances to worry about.

Want to rename something? Go for it.
Want to add another variable or feature? Sure!
Legacy? Just delete it!

So when do we create a new version?
Whenever the major version in the behaviour’s attribute, e.g. [Version(1,0)], changes.

When does the attribute change?
When the tools detect a breaking change AND/OR when a developer changes the attribute manually.
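
At load time, something then has to map a performance’s recorded version back onto the right class. A minimal sketch of that resolution, reusing the hypothetical VersionAttribute and naming convention from earlier:

using System;
using System.Linq;
using System.Reflection;

// Sketch: resolve the class for a behaviour name + major version.
// Assumes the _vN suffix convention; the unsuffixed class is current.
public static class BehaviourResolver
{
    public static Type Resolve(string baseName, int major)
    {
        string suffixed = $"{baseName}_v{major}";
        return AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .FirstOrDefault(t =>
                t.Name == suffixed ||
                (t.Name == baseName &&
                 t.GetCustomAttribute<VersionAttribute>()?.Major == major));
    }
}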

If you do that on compile, won’t that generate hundreds of new versions as the behaviour is worked on?
This is actually pretty important. In our case we have a database which tracks which versions are currently in use.
If version 3.0 of a behaviour was never exposed to our users, there’s no need to increment beyond that, regardless of the changes.
This helps a lot to keep things sane and is related to a much larger aspect of the project — making asset bundles / addressables for our content.
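
The bump rule itself is simple enough to sketch. The database interface here is hypothetical:

// Only branch a new code path if the current major version has
// actually been exposed to users. Hypothetical database interface.
public interface IVersionDatabase
{
    bool IsVersionInUse(string behaviourName, int major);
}

public static class VersionBumper
{
    public static int NextMajor(string behaviourName, int currentMajor,
                                IVersionDatabase db)
    {
        // If no released performance references the current major
        // version, keep reworking it in place; no new version needed.
        return db.IsVersionInUse(behaviourName, currentMajor)
            ? currentMajor + 1
            : currentMajor;
    }
}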

I’ve simplified the issues and approaches mentioned here. Clearly the devil is in the details — all of which will be specific to your own problem domain. In our case this includes things like deterministic randomness, what approach we take to versioning (or not) the rest of the codebase etc. Hopefully this post is taken as-is — as a bit of food for thought, rather than a detailed how-to!
